
     1  rclone(1) User Manual
     2  Nick Craig-Wood
     3  Jun 15, 2019
     4  
     5  
     6  
     7  RCLONE
     8  
     9  
    10  [Logo]
    11  
    12  Rclone is a command line program to sync files and directories to and
    13  from:
    14  
    15  -   Alibaba Cloud (Aliyun) Object Storage System (OSS)
    16  -   Amazon Drive (See note)
    17  -   Amazon S3
    18  -   Backblaze B2
    19  -   Box
    20  -   Ceph
    21  -   DigitalOcean Spaces
    22  -   Dreamhost
    23  -   Dropbox
    24  -   FTP
    25  -   Google Cloud Storage
    26  -   Google Drive
    27  -   HTTP
    28  -   Hubic
    29  -   Jottacloud
    30  -   IBM COS S3
    31  -   Koofr
    32  -   Memset Memstore
    33  -   Mega
    34  -   Microsoft Azure Blob Storage
    35  -   Microsoft OneDrive
    36  -   Minio
    37  -   Nextcloud
    38  -   OVH
    39  -   OpenDrive
    40  -   Openstack Swift
    41  -   Oracle Cloud Storage
    42  -   ownCloud
    43  -   pCloud
    44  -   put.io
    45  -   QingStor
    46  -   Rackspace Cloud Files
    47  -   rsync.net
    48  -   Scaleway
    49  -   SFTP
    50  -   Wasabi
    51  -   WebDAV
    52  -   Yandex Disk
    53  -   The local filesystem
    54  
    55  Features
    56  
    57  -   MD5/SHA1 hashes checked at all times for file integrity
    58  -   Timestamps preserved on files
    59  -   Partial syncs supported on a whole file basis
    60  -   Copy mode to just copy new/changed files
    61  -   Sync (one way) mode to make a directory identical
    62  -   Check mode to check for file hash equality
    63  -   Can sync to and from network, eg two different cloud accounts
    64  -   Encryption backend
    65  -   Cache backend
    66  -   Union backend
    67  -   Optional FUSE mount (rclone mount)
    68  -   Multi-threaded downloads to local disk
    69  -   Can serve local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA
    70  
    71  Links
    72  
    73  -   Home page
    74  -   GitHub project page for source and bug tracker
    75  -   Rclone Forum
    76  -   Downloads
    77  
    78  
    79  
    80  INSTALL
    81  
    82  
    83  Rclone is a Go program and comes as a single binary file.
    84  
    85  
    86  Quickstart
    87  
    88  -   Download the relevant binary.
    89  -   Extract the rclone or rclone.exe binary from the archive
    90  -   Run rclone config to set up. See rclone config docs for more details.
    91  
    92  See below for some expanded Linux / macOS instructions.
    93  
    94  See the Usage section of the docs for how to use rclone, or run
    95  rclone -h.
    96  
    97  
    98  Script installation
    99  
   100  To install rclone on Linux/macOS/BSD systems, run:
   101  
   102      curl https://rclone.org/install.sh | sudo bash
   103  
   104  For beta installation, run:
   105  
   106      curl https://rclone.org/install.sh | sudo bash -s beta
   107  
   108  Note that this script checks the version of rclone installed first and
   109  won’t re-download if not needed.
   110  
   111  
   112  Linux installation from precompiled binary
   113  
   114  Fetch and unpack
   115  
   116      curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
   117      unzip rclone-current-linux-amd64.zip
   118      cd rclone-*-linux-amd64
   119  
   120  Copy binary file
   121  
   122      sudo cp rclone /usr/bin/
   123      sudo chown root:root /usr/bin/rclone
   124      sudo chmod 755 /usr/bin/rclone
   125  
   126  Install manpage
   127  
   128      sudo mkdir -p /usr/local/share/man/man1
   129      sudo cp rclone.1 /usr/local/share/man/man1/
   130      sudo mandb 
   131  
   132  Run rclone config to set up. See rclone config docs for more details.
   133  
   134      rclone config
   135  
   136  
   137  macOS installation from precompiled binary
   138  
   139  Download the latest version of rclone.
   140  
   141      cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip
   142  
   143  Unzip the download and cd to the extracted folder.
   144  
   145      unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64
   146  
   147  Move rclone to your $PATH. You will be prompted for your password.
   148  
   149      sudo mkdir -p /usr/local/bin
   150      sudo mv rclone /usr/local/bin/
   151  
   152  (the mkdir command is safe to run, even if the directory already
   153  exists).
   154  
   155  Remove the leftover files.
   156  
   157      cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
   158  
   159  Run rclone config to set up. See rclone config docs for more details.
   160  
   161      rclone config
   162  
   163  
   164  Install from source
   165  
   166  Make sure you have at least Go 1.7 installed. Download Go if necessary.
   167  The latest release is recommended. Then
   168  
   169      git clone https://github.com/ncw/rclone.git
   170      cd rclone
   171      go build
   172      ./rclone version
   173  
   174  You can also build and install rclone in the GOPATH (which defaults to
   175  ~/go) with:
   176  
   177      go get -u -v github.com/ncw/rclone
   178  
   179  and this will build the binary in $GOPATH/bin (~/go/bin/rclone by
   180  default) after downloading the source to
   181  $GOPATH/src/github.com/ncw/rclone (~/go/src/github.com/ncw/rclone by
   182  default).
   183  
   184  
   185  Installation with Ansible
   186  
   187  This can be done with Stefan Weichinger’s ansible role.
   188  
   189  Instructions
   190  
   191  1.  git clone https://github.com/stefangweichinger/ansible-rclone.git
   192      into your local roles-directory
   193  2.  add the role to the hosts you want rclone installed to:
   194  
   195          - hosts: rclone-hosts
   196            roles:
   197                - rclone
   198  
   199  
   200  Configure
   201  
   202  First, you’ll need to configure rclone. As the object storage systems
   203  have quite complicated authentication, their details are kept in a
   204  config file. (See the --config entry for how to find the config file
   205  and choose its location.)
   206  
   207  The easiest way to make the config is to run rclone with the config
   208  option:
   209  
   210      rclone config
   211  
   212  See the following for detailed instructions for
   213  
   214  -   Alias
   215  -   Amazon Drive
   216  -   Amazon S3
   217  -   Backblaze B2
   218  -   Box
   219  -   Cache
   220  -   Crypt - to encrypt other remotes
   221  -   DigitalOcean Spaces
   222  -   Dropbox
   223  -   FTP
   224  -   Google Cloud Storage
   225  -   Google Drive
   226  -   HTTP
   227  -   Hubic
   228  -   Jottacloud
   229  -   Koofr
   230  -   Mega
   231  -   Microsoft Azure Blob Storage
   232  -   Microsoft OneDrive
   233  -   Openstack Swift / Rackspace Cloudfiles / Memset Memstore
   234  -   OpenDrive
   235  -   Pcloud
   236  -   QingStor
   237  -   SFTP
   238  -   Union
   239  -   WebDAV
   240  -   Yandex Disk
   241  -   The local filesystem
   242  
   243  
   244  Usage
   245  
   246  Rclone syncs a directory tree from one storage system to another.
   247  
   248  Its syntax is like this
   249  
   250      Syntax: [options] subcommand <parameters> <parameters...>
   251  
   252  Source and destination paths are specified by the name you gave the
   253  storage system in the config file then the sub path, eg “drive:myfolder”
   254  to look at “myfolder” in Google drive.
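
        For example, assuming you have configured a remote called “drive” as in
        the example above, you could then run (the local path is illustrative):

            rclone lsd drive:                         # list top level directories
            rclone ls drive:myfolder                  # list files in "myfolder"
            rclone copy /tmp/pictures drive:myfolder  # copy a local directory into it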
   255  
   256  You can define as many storage paths as you like in the config file.
   257  
   258  
   259  Subcommands
   260  
   261  rclone uses a system of subcommands. For example
   262  
   263      rclone ls remote:path # lists a remote
   264      rclone copy /local/path remote:path # copies /local/path to the remote
   265      rclone sync /local/path remote:path # syncs /local/path to the remote
   266  
   267  
   268  rclone config
   269  
   270  Enter an interactive configuration session.
   271  
   272  Synopsis
   273  
   274  Enter an interactive configuration session where you can set up new
   275  remotes and manage existing ones. You may also set or remove a password
   276  to protect your configuration.
   277  
   278      rclone config [flags]
   279  
   280  Options
   281  
   282        -h, --help   help for config
   283  
   284  SEE ALSO
   285  
   286  -   rclone - Show help for rclone commands, flags and backends.
   287  -   rclone config create - Create a new remote with name, type and
   288      options.
   289  -   rclone config delete - Delete an existing remote <name>.
   290  -   rclone config dump - Dump the config file as JSON.
   291  -   rclone config edit - Enter an interactive configuration session.
   292  -   rclone config file - Show path of configuration file in use.
   293  -   rclone config password - Update password in an existing remote.
   294  -   rclone config providers - List in JSON format all the providers and
   295      options.
   296  -   rclone config show - Print (decrypted) config file, or the config
   297      for a single remote.
   298  -   rclone config update - Update options in an existing remote.
   299  
   300  Auto generated by spf13/cobra on 15-Jun-2019
   301  
   302  
   303  rclone copy
   304  
   305  Copy files from source to dest, skipping already copied
   306  
   307  Synopsis
   308  
   309  Copy the source to the destination. Doesn’t transfer unchanged files,
   310  testing by size and modification time or MD5SUM. Doesn’t delete files
   311  from the destination.
   312  
   313  Note that it is always the contents of the directory that is synced, not
   314  the directory, so when source:path is a directory, it’s the contents of
   315  source:path that are copied, not the directory name and contents.
   316  
   317  If dest:path doesn’t exist, it is created and the source:path contents
   318  go there.
   319  
   320  For example
   321  
   322      rclone copy source:sourcepath dest:destpath
   323  
   324  Let’s say there are two files in sourcepath
   325  
   326      sourcepath/one.txt
   327      sourcepath/two.txt
   328  
   329  This copies them to
   330  
   331      destpath/one.txt
   332      destpath/two.txt
   333  
   334  Not to
   335  
   336      destpath/sourcepath/one.txt
   337      destpath/sourcepath/two.txt
   338  
   339  If you are familiar with rsync, rclone always works as if you had
   340  written a trailing / - meaning “copy the contents of this directory”.
   341  This applies to all commands and whether you are talking about the
   342  source or destination.
   343  
   344  See the --no-traverse option for controlling whether rclone lists the
   345  destination directory or not. Supplying this option when copying a small
   346  number of files into a large destination can speed transfers up greatly.
   347  
   348  For example, if you have many files in /path/to/src but only a few of
   349  them change every day, you can copy all the files which have changed
   350  recently very efficiently like this:
   351  
   352      rclone copy --max-age 24h --no-traverse /path/to/src remote:
   353  
   354  NOTE: Use the -P/--progress flag to view real-time transfer statistics
   355  
   356      rclone copy source:path dest:path [flags]
   357  
   358  Options
   359  
   360            --create-empty-src-dirs   Create empty source dirs on destination after copy
   361        -h, --help                    help for copy
   362  
   363  SEE ALSO
   364  
   365  -   rclone - Show help for rclone commands, flags and backends.
   366  
   367  Auto generated by spf13/cobra on 15-Jun-2019
   368  
   369  
   370  rclone sync
   371  
   372  Make source and dest identical, modifying destination only.
   373  
   374  Synopsis
   375  
   376  Sync the source to the destination, changing the destination only.
   377  Doesn’t transfer unchanged files, testing by size and modification time
   378  or MD5SUM. Destination is updated to match source, including deleting
   379  files if necessary.
   380  
   381  IMPORTANT: Since this can cause data loss, test first with the --dry-run
   382  flag to see exactly what would be copied and deleted.
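
        For example, a cautious workflow (the paths here are placeholders) is to
        preview the sync first and only then run it for real:

            rclone sync --dry-run /local/path remote:path   # show what would change
            rclone sync /local/path remote:path             # actually do it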
   383  
   384  Note that files in the destination won’t be deleted if there were any
   385  errors at any point.
   386  
   387  It is always the contents of the directory that is synced, not the
   388  directory, so when source:path is a directory, it’s the contents of
   389  source:path that are copied, not the directory name and contents. See
   390  extended explanation in the copy command above if unsure.
   391  
   392  If dest:path doesn’t exist, it is created and the source:path contents
   393  go there.
   394  
   395  NOTE: Use the -P/--progress flag to view real-time transfer statistics
   396  
   397      rclone sync source:path dest:path [flags]
   398  
   399  Options
   400  
   401            --create-empty-src-dirs   Create empty source dirs on destination after sync
   402        -h, --help                    help for sync
   403  
   404  SEE ALSO
   405  
   406  -   rclone - Show help for rclone commands, flags and backends.
   407  
   408  Auto generated by spf13/cobra on 15-Jun-2019
   409  
   410  
   411  rclone move
   412  
   413  Move files from source to dest.
   414  
   415  Synopsis
   416  
   417  Moves the contents of the source directory to the destination directory.
   418  Rclone will error if the source and destination overlap and the remote
   419  does not support a server side directory move operation.
   420  
   421  If no filters are in use and if possible this will server side move
   422  source:path into dest:path. After this source:path will no longer
   423  exist.
   424  
   425  Otherwise for each file in source:path selected by the filters (if any)
   426  this will move it into dest:path. If possible a server side move will be
   427  used, otherwise it will copy it (server side if possible) into dest:path
   428  then delete the original (if no errors on copy) in source:path.
   429  
   430  If you want to delete empty source directories after move, use the
   431  --delete-empty-src-dirs flag.
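
        For example (the paths are placeholders):

            rclone move --delete-empty-src-dirs /local/path remote:path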
   432  
   433  See the --no-traverse option for controlling whether rclone lists the
   434  destination directory or not. Supplying this option when moving a small
   435  number of files into a large destination can speed transfers up greatly.
   436  
   437  IMPORTANT: Since this can cause data loss, test first with the --dry-run
   438  flag.
   439  
   440  NOTE: Use the -P/--progress flag to view real-time transfer statistics.
   441  
   442      rclone move source:path dest:path [flags]
   443  
   444  Options
   445  
   446            --create-empty-src-dirs   Create empty source dirs on destination after move
   447            --delete-empty-src-dirs   Delete empty source dirs after move
   448        -h, --help                    help for move
   449  
   450  SEE ALSO
   451  
   452  -   rclone - Show help for rclone commands, flags and backends.
   453  
   454  Auto generated by spf13/cobra on 15-Jun-2019
   455  
   456  
   457  rclone delete
   458  
   459  Remove the contents of path.
   460  
   461  Synopsis
   462  
   463  Remove the files in path. Unlike purge it obeys include/exclude filters
   464  so it can be used to selectively delete files.
   465  
   466  rclone delete only deletes objects but leaves the directory structure
   467  alone. If you want to delete a directory and all of its contents use
   468  rclone purge
   469  
   470  Eg delete all files bigger than 100MBytes
   471  
   472  Check what would be deleted first (use either)
   473  
   474      rclone --min-size 100M lsl remote:path
   475      rclone --dry-run --min-size 100M delete remote:path
   476  
   477  Then delete
   478  
   479      rclone --min-size 100M delete remote:path
   480  
   481  That reads “delete everything with a minimum size of 100 MB”, hence
   482  delete all files bigger than 100MBytes.
   483  
   484      rclone delete remote:path [flags]
   485  
   486  Options
   487  
   488        -h, --help   help for delete
   489  
   490  SEE ALSO
   491  
   492  -   rclone - Show help for rclone commands, flags and backends.
   493  
   494  Auto generated by spf13/cobra on 15-Jun-2019
   495  
   496  
   497  rclone purge
   498  
   499  Remove the path and all of its contents.
   500  
   501  Synopsis
   502  
   503  Remove the path and all of its contents. Note that this does not obey
   504  include/exclude filters - everything will be removed. Use delete if you
   505  want to selectively delete files.
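
        As purge removes everything under the path, a cautious approach is to
        preview it first with the global --dry-run flag, eg (the path is a
        placeholder):

            rclone --dry-run purge remote:path
            rclone purge remote:path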
   506  
   507      rclone purge remote:path [flags]
   508  
   509  Options
   510  
   511        -h, --help   help for purge
   512  
   513  SEE ALSO
   514  
   515  -   rclone - Show help for rclone commands, flags and backends.
   516  
   517  Auto generated by spf13/cobra on 15-Jun-2019
   518  
   519  
   520  rclone mkdir
   521  
   522  Make the path if it doesn’t already exist.
   523  
   524  Synopsis
   525  
   526  Make the path if it doesn’t already exist.
   527  
   528      rclone mkdir remote:path [flags]
   529  
   530  Options
   531  
   532        -h, --help   help for mkdir
   533  
   534  SEE ALSO
   535  
   536  -   rclone - Show help for rclone commands, flags and backends.
   537  
   538  Auto generated by spf13/cobra on 15-Jun-2019
   539  
   540  
   541  rclone rmdir
   542  
   543  Remove the path if empty.
   544  
   545  Synopsis
   546  
   547  Remove the path. Note that you can’t remove a path with objects in it,
   548  use purge for that.
   549  
   550      rclone rmdir remote:path [flags]
   551  
   552  Options
   553  
   554        -h, --help   help for rmdir
   555  
   556  SEE ALSO
   557  
   558  -   rclone - Show help for rclone commands, flags and backends.
   559  
   560  Auto generated by spf13/cobra on 15-Jun-2019
   561  
   562  
   563  rclone check
   564  
   565  Checks the files in the source and destination match.
   566  
   567  Synopsis
   568  
   569  Checks the files in the source and destination match. It compares sizes
   570  and hashes (MD5 or SHA1) and logs a report of files which don’t match.
   571  It doesn’t alter the source or destination.
   572  
   573  If you supply the --size-only flag, it will only compare the sizes, not
   574  the hashes as well. Use this for a quick check.
   575  
   576  If you supply the --download flag, it will download the data from both
   577  remotes and check them against each other on the fly. This can be useful
   578  for remotes that don’t support hashes or if you really want to check all
   579  the data.
   580  
   581  If you supply the --one-way flag, it will only check that files in source
   582  match the files in destination, not the other way around. Meaning extra
   583  files in destination that are not in the source will not trigger an
   584  error.
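
        For example (the paths are placeholders):

            rclone check /local/path remote:path               # compare sizes and hashes
            rclone check --size-only /local/path remote:path   # quick check, sizes only
            rclone check --download /local/path remote:path    # compare the actual data
            rclone check --one-way /local/path remote:path     # ignore extra files in dest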
   585  
   586      rclone check source:path dest:path [flags]
   587  
   588  Options
   589  
   590            --download   Check by downloading rather than with hash.
   591        -h, --help       help for check
   592            --one-way    Check one way only, source files must exist on remote
   593  
   594  SEE ALSO
   595  
   596  -   rclone - Show help for rclone commands, flags and backends.
   597  
   598  Auto generated by spf13/cobra on 15-Jun-2019
   599  
   600  
   601  rclone ls
   602  
   603  List the objects in the path with size and path.
   604  
   605  Synopsis
   606  
   607  Lists the objects in the source path to standard output in a human
   608  readable format with size and path. Recurses by default.
   609  
   610  Eg
   611  
   612      $ rclone ls swift:bucket
   613          60295 bevajer5jef
   614          90613 canole
   615          94467 diwogej7
   616          37600 fubuwic
   617  
   618  Any of the filtering options can be applied to this command.
   619  
   620  There are several related list commands
   621  
   622  -   ls to list size and path of objects only
   623  -   lsl to list modification time, size and path of objects only
   624  -   lsd to list directories only
   625  -   lsf to list objects and directories in easy to parse format
   626  -   lsjson to list objects and directories in JSON format
   627  
   628  ls,lsl,lsd are designed to be human readable. lsf is designed to be
   629  human and machine readable. lsjson is designed to be machine readable.
   630  
   631  Note that ls and lsl recurse by default - use “--max-depth 1” to stop the
   632  recursion.
   633  
   634  The other list commands lsd,lsf,lsjson do not recurse by default - use
   635  “-R” to make them recurse.
   636  
   637  Listing a non existent directory will produce an error except for
   638  remotes which can’t have empty directories (eg s3, swift, gcs, etc - the
   639  bucket based remotes).
   640  
   641      rclone ls remote:path [flags]
   642  
   643  Options
   644  
   645        -h, --help   help for ls
   646  
   647  SEE ALSO
   648  
   649  -   rclone - Show help for rclone commands, flags and backends.
   650  
   651  Auto generated by spf13/cobra on 15-Jun-2019
   652  
   653  
   654  rclone lsd
   655  
   656  List all directories/containers/buckets in the path.
   657  
   658  Synopsis
   659  
   660  Lists the directories in the source path to standard output. Does not
   661  recurse by default. Use the -R flag to recurse.
   662  
   663  This command lists the total size of the directory (if known, -1 if
   664  not), the modification time (if known, the current time if not), the
   665  number of objects in the directory (if known, -1 if not) and the name of
   666  the directory, eg
   667  
   668      $ rclone lsd swift:
   669            494000 2018-04-26 08:43:20     10000 10000files
   670                65 2018-04-26 08:43:20         1 1File
   671  
   672  Or
   673  
   674      $ rclone lsd drive:test
   675                -1 2016-10-17 17:41:53        -1 1000files
   676                -1 2017-01-03 14:40:54        -1 2500files
   677                -1 2017-07-08 14:39:28        -1 4000files
   678  
   679  If you just want the directory names use “rclone lsf --dirs-only”.
   680  
   681  Any of the filtering options can be applied to this command.
   682  
   683  There are several related list commands
   684  
   685  -   ls to list size and path of objects only
   686  -   lsl to list modification time, size and path of objects only
   687  -   lsd to list directories only
   688  -   lsf to list objects and directories in easy to parse format
   689  -   lsjson to list objects and directories in JSON format
   690  
   691  ls,lsl,lsd are designed to be human readable. lsf is designed to be
   692  human and machine readable. lsjson is designed to be machine readable.
   693  
   694  Note that ls and lsl recurse by default - use “--max-depth 1” to stop the
   695  recursion.
   696  
   697  The other list commands lsd,lsf,lsjson do not recurse by default - use
   698  “-R” to make them recurse.
   699  
   700  Listing a non existent directory will produce an error except for
   701  remotes which can’t have empty directories (eg s3, swift, gcs, etc - the
   702  bucket based remotes).
   703  
   704      rclone lsd remote:path [flags]
   705  
   706  Options
   707  
   708        -h, --help        help for lsd
   709        -R, --recursive   Recurse into the listing.
   710  
   711  SEE ALSO
   712  
   713  -   rclone - Show help for rclone commands, flags and backends.
   714  
   715  Auto generated by spf13/cobra on 15-Jun-2019
   716  
   717  
   718  rclone lsl
   719  
   720  List the objects in path with modification time, size and path.
   721  
   722  Synopsis
   723  
   724  Lists the objects in the source path to standard output in a human
   725  readable format with modification time, size and path. Recurses by
   726  default.
   727  
   728  Eg
   729  
   730      $ rclone lsl swift:bucket
   731          60295 2016-06-25 18:55:41.062626927 bevajer5jef
   732          90613 2016-06-25 18:55:43.302607074 canole
   733          94467 2016-06-25 18:55:43.046609333 diwogej7
   734          37600 2016-06-25 18:55:40.814629136 fubuwic
   735  
   736  Any of the filtering options can be applied to this command.
   737  
   738  There are several related list commands
   739  
   740  -   ls to list size and path of objects only
   741  -   lsl to list modification time, size and path of objects only
   742  -   lsd to list directories only
   743  -   lsf to list objects and directories in easy to parse format
   744  -   lsjson to list objects and directories in JSON format
   745  
   746  ls,lsl,lsd are designed to be human readable. lsf is designed to be
   747  human and machine readable. lsjson is designed to be machine readable.
   748  
   749  Note that ls and lsl recurse by default - use “--max-depth 1” to stop the
   750  recursion.
   751  
   752  The other list commands lsd,lsf,lsjson do not recurse by default - use
   753  “-R” to make them recurse.
   754  
   755  Listing a non existent directory will produce an error except for
   756  remotes which can’t have empty directories (eg s3, swift, gcs, etc - the
   757  bucket based remotes).
   758  
   759      rclone lsl remote:path [flags]
   760  
   761  Options
   762  
   763        -h, --help   help for lsl
   764  
   765  SEE ALSO
   766  
   767  -   rclone - Show help for rclone commands, flags and backends.
   768  
   769  Auto generated by spf13/cobra on 15-Jun-2019
   770  
   771  
   772  rclone md5sum
   773  
   774  Produces an md5sum file for all the objects in the path.
   775  
   776  Synopsis
   777  
   778  Produces an md5sum file for all the objects in the path. This is in the
   779  same format as the standard md5sum tool produces.
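
        For example, to save the checksums to a local file for later comparison
        (the file name is illustrative):

            rclone md5sum remote:path > remote-md5sums.txt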
   780  
   781      rclone md5sum remote:path [flags]
   782  
   783  Options
   784  
   785        -h, --help   help for md5sum
   786  
   787  SEE ALSO
   788  
   789  -   rclone - Show help for rclone commands, flags and backends.
   790  
   791  Auto generated by spf13/cobra on 15-Jun-2019
   792  
   793  
   794  rclone sha1sum
   795  
   796  Produces an sha1sum file for all the objects in the path.
   797  
   798  Synopsis
   799  
   800  Produces an sha1sum file for all the objects in the path. This is in the
   801  same format as the standard sha1sum tool produces.
   802  
   803      rclone sha1sum remote:path [flags]
   804  
   805  Options
   806  
   807        -h, --help   help for sha1sum
   808  
   809  SEE ALSO
   810  
   811  -   rclone - Show help for rclone commands, flags and backends.
   812  
   813  Auto generated by spf13/cobra on 15-Jun-2019
   814  
   815  
   816  rclone size
   817  
   818  Prints the total size and number of objects in remote:path.
   819  
   820  Synopsis
   821  
   822  Prints the total size and number of objects in remote:path.
   823  
   824      rclone size remote:path [flags]
   825  
   826  Options
   827  
   828        -h, --help   help for size
   829            --json   format output as JSON
   830  
   831  SEE ALSO
   832  
   833  -   rclone - Show help for rclone commands, flags and backends.
   834  
   835  Auto generated by spf13/cobra on 15-Jun-2019
   836  
   837  
   838  rclone version
   839  
   840  Show the version number.
   841  
   842  Synopsis
   843  
   844  Show the version number, the go version and the architecture.
   845  
   846  Eg
   847  
   848      $ rclone version
   849      rclone v1.41
   850      - os/arch: linux/amd64
   851      - go version: go1.10
   852  
   853  If you supply the --check flag, then it will do an online check to
   854  compare your version with the latest release and the latest beta.
   855  
   856      $ rclone version --check
   857      yours:  1.42.0.6
   858      latest: 1.42          (released 2018-06-16)
   859      beta:   1.42.0.5      (released 2018-06-17)
   860  
   861  Or
   862  
   863      $ rclone version --check
   864      yours:  1.41
   865      latest: 1.42          (released 2018-06-16)
   866        upgrade: https://downloads.rclone.org/v1.42
   867      beta:   1.42.0.5      (released 2018-06-17)
   868        upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
   869  
   870      rclone version [flags]
   871  
   872  Options
   873  
   874            --check   Check for new version.
   875        -h, --help    help for version
   876  
   877  SEE ALSO
   878  
   879  -   rclone - Show help for rclone commands, flags and backends.
   880  
   881  Auto generated by spf13/cobra on 15-Jun-2019
   882  
   883  
   884  rclone cleanup
   885  
   886  Clean up the remote if possible
   887  
   888  Synopsis
   889  
   890  Clean up the remote if possible. Empty the trash or delete old file
   891  versions. Not supported by all remotes.
   892  
   893      rclone cleanup remote:path [flags]
   894  
   895  Options
   896  
   897        -h, --help   help for cleanup
   898  
   899  SEE ALSO
   900  
   901  -   rclone - Show help for rclone commands, flags and backends.
   902  
   903  Auto generated by spf13/cobra on 15-Jun-2019
   904  
   905  
   906  rclone dedupe
   907  
   908  Interactively find duplicate files and delete/rename them.
   909  
   910  Synopsis
   911  
   912  By default dedupe interactively finds duplicate files and offers to
   913  delete all but one or rename them to be different. Only useful with
   914  Google Drive which can have duplicate file names.
   915  
   916  In the first pass it will merge directories with the same name. It will
   917  do this iteratively until all the identical directories have been
   918  merged.
   919  
   920  The dedupe command will delete all but one of any identical (same
   921  md5sum) files it finds without confirmation. This means that for most
   922  duplicated files the dedupe command will not be interactive. You can use
   923  --dry-run to see what would happen without doing anything.
   924  
   925  Here is an example run.
   926  
   927  Before - with duplicates
   928  
   929      $ rclone lsl drive:dupes
   930        6048320 2016-03-05 16:23:16.798000000 one.txt
   931        6048320 2016-03-05 16:23:11.775000000 one.txt
   932         564374 2016-03-05 16:23:06.731000000 one.txt
   933        6048320 2016-03-05 16:18:26.092000000 one.txt
   934        6048320 2016-03-05 16:22:46.185000000 two.txt
   935        1744073 2016-03-05 16:22:38.104000000 two.txt
   936         564374 2016-03-05 16:22:52.118000000 two.txt
   937  
   938  Now the dedupe session
   939  
   940      $ rclone dedupe drive:dupes
   941      2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
   942      one.txt: Found 4 duplicates - deleting identical copies
   943      one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
   944      one.txt: 2 duplicates remain
   945        1:      6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
   946        2:       564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
   947      s) Skip and do nothing
   948      k) Keep just one (choose which in next step)
   949      r) Rename all to be different (by changing file.jpg to file-1.jpg)
   950      s/k/r> k
   951      Enter the number of the file to keep> 1
   952      one.txt: Deleted 1 extra copies
   953      two.txt: Found 3 duplicates - deleting identical copies
   954      two.txt: 3 duplicates remain
   955        1:       564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
   956        2:      6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
   957        3:      1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
   958      s) Skip and do nothing
   959      k) Keep just one (choose which in next step)
   960      r) Rename all to be different (by changing file.jpg to file-1.jpg)
   961      s/k/r> r
   962      two-1.txt: renamed from: two.txt
   963      two-2.txt: renamed from: two.txt
   964      two-3.txt: renamed from: two.txt
   965  
   966  The result being
   967  
   968      $ rclone lsl drive:dupes
   969        6048320 2016-03-05 16:23:16.798000000 one.txt
   970         564374 2016-03-05 16:22:52.118000000 two-1.txt
   971        6048320 2016-03-05 16:22:46.185000000 two-2.txt
   972        1744073 2016-03-05 16:22:38.104000000 two-3.txt
   973  
   974  Dedupe can be run non-interactively using the --dedupe-mode flag or by
   975  using an extra parameter with the same value
   976  
   977  -   --dedupe-mode interactive - interactive as above.
   978  -   --dedupe-mode skip - removes identical files then skips anything
   979      left.
   980  -   --dedupe-mode first - removes identical files then keeps the first
   981      one.
   982  -   --dedupe-mode newest - removes identical files then keeps the newest
   983      one.
   984  -   --dedupe-mode oldest - removes identical files then keeps the oldest
   985      one.
   986  -   --dedupe-mode largest - removes identical files then keeps the
   987      largest one.
   988  -   --dedupe-mode rename - removes identical files then renames the rest
   989      to be different.
   990  
   991  For example to rename all the identically named photos in your Google
   992  Photos directory, do
   993  
   994      rclone dedupe --dedupe-mode rename "drive:Google Photos"
   995  
   996  Or
   997  
   998      rclone dedupe rename "drive:Google Photos"
   999  
  1000      rclone dedupe [mode] remote:path [flags]
  1001  
  1002  Options
  1003  
  1004            --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
  1005        -h, --help                 help for dedupe
  1006  
  1007  SEE ALSO
  1008  
  1009  -   rclone - Show help for rclone commands, flags and backends.
  1010  
  1011  Auto generated by spf13/cobra on 15-Jun-2019
  1012  
  1013  
  1014  rclone about
  1015  
  1016  Get quota information from the remote.
  1017  
  1018  Synopsis
  1019  
  1020  Get quota information from the remote, like bytes used/free/quota and
  1021  bytes used in the trash. Not supported by all remotes.
  1022  
  1023  This will print to stdout something like this:
  1024  
  1025      Total:   17G
  1026      Used:    7.444G
  1027      Free:    1.315G
  1028      Trashed: 100.000M
  1029      Other:   8.241G
  1030  
  1031  Where the fields are:
  1032  
  1033  -   Total: total size available.
  1034  -   Used: total size used
  1035  -   Free: total amount this user could upload.
  1036  -   Trashed: total amount in the trash
  1037  -   Other: total amount in other storage (eg Gmail, Google Photos)
  1038  -   Objects: total number of objects in the storage
  1039  
  1040  Note that not all the backends provide all the fields - they will be
  1041  missing if they are not known for that backend. Where it is known that
  1042  the value is unlimited the value will also be omitted.
  1043  
  1044  Use the --full flag to see the numbers written out in full, eg
  1045  
  1046      Total:   18253611008
  1047      Used:    7993453766
  1048      Free:    1411001220
  1049      Trashed: 104857602
  1050      Other:   8849156022
  1051  
  1052  Use the --json flag for a computer readable output, eg
  1053  
  1054      {
  1055          "total": 18253611008,
  1056          "used": 7993453766,
  1057          "trashed": 104857602,
  1058          "other": 8849156022,
  1059          "free": 1411001220
  1060      }
  1061  
  1062      rclone about remote: [flags]
  1063  
  1064  Options
  1065  
  1066            --full   Full numbers instead of SI units
  1067        -h, --help   help for about
  1068            --json   Format output as JSON
  1069  
  1070  SEE ALSO
  1071  
  1072  -   rclone - Show help for rclone commands, flags and backends.
  1073  
  1074  Auto generated by spf13/cobra on 15-Jun-2019
  1075  
  1076  
  1077  rclone authorize
  1078  
  1079  Remote authorization.
  1080  
  1081  Synopsis
  1082  
  1083  Remote authorization. Used to authorize a remote or headless rclone from
  1084  a machine with a browser - use as instructed by rclone config.
  1085  
  1086      rclone authorize [flags]
  1087  
  1088  Options
  1089  
  1090        -h, --help   help for authorize
  1091  
  1092  SEE ALSO
  1093  
  1094  -   rclone - Show help for rclone commands, flags and backends.
  1095  
  1096  Auto generated by spf13/cobra on 15-Jun-2019
  1097  
  1098  
  1099  rclone cachestats
  1100  
  1101  Print cache stats for a remote
  1102  
  1103  Synopsis
  1104  
  1105  Print cache stats for a remote in JSON format
  1106  
  1107      rclone cachestats source: [flags]
  1108  
  1109  Options
  1110  
  1111        -h, --help   help for cachestats
  1112  
  1113  SEE ALSO
  1114  
  1115  -   rclone - Show help for rclone commands, flags and backends.
  1116  
  1117  Auto generated by spf13/cobra on 15-Jun-2019
  1118  
  1119  
  1120  rclone cat
  1121  
  1122  Concatenates any files and sends them to stdout.
  1123  
  1124  Synopsis
  1125  
  1126  rclone cat sends any files to standard output.
  1127  
  1128  You can use it like this to output a single file
  1129  
  1130      rclone cat remote:path/to/file
  1131  
  1132  Or like this to output any file in dir or subdirectories.
  1133  
  1134      rclone cat remote:path/to/dir
  1135  
  1136  Or like this to output any .txt files in dir or subdirectories.
  1137  
  1138      rclone --include "*.txt" cat remote:path/to/dir
  1139  
  1140  Use the --head flag to print characters only at the start, --tail for the
  1141  end and --offset and --count to print a section in the middle. Note that
  1142  if offset is negative it will count from the end, so --offset -1 --count 1
  1143  is equivalent to --tail 1.
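
        For example (the file path and the character counts are illustrative):

            rclone cat --head 100 remote:path/to/file   # first 100 characters
            rclone cat --tail 20 remote:path/to/file    # last 20 characters
            rclone cat --offset 10 --count 5 remote:path/to/file   # 5 characters from offset 10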
  1144  
  1145      rclone cat remote:path [flags]
  1146  
  1147  Options
  1148  
  1149            --count int    Only print N characters. (default -1)
  1150            --discard      Discard the output instead of printing.
  1151            --head int     Only print the first N characters.
  1152        -h, --help         help for cat
  1153            --offset int   Start printing at offset N (or from end if -ve).
  1154            --tail int     Only print the last N characters.
  1155  
  1156  SEE ALSO
  1157  
  1158  -   rclone - Show help for rclone commands, flags and backends.
  1159  
  1160  Auto generated by spf13/cobra on 15-Jun-2019
  1161  
  1162  
  1163  rclone config create
  1164  
  1165  Create a new remote with name, type and options.
  1166  
  1167  Synopsis
  1168  
  1169  Create a new remote of <name> with <type> and options. The options
  1170  should be passed in pairs of <key> <value>.
  1171  
  1172  For example to make a swift remote of name myremote using auto config
  1173  you would do:
  1174  
  1175      rclone config create myremote swift env_auth true
  1176  
  1177  Note that if the config process would normally ask a question the
  1178  default is taken. Each time that happens rclone will print a message
  1179  saying how to affect the value taken.
  1180  
  1181  If any of the parameters passed is a password field, then rclone will
  1182  automatically obscure them before putting them in the config file.
  1183  
  1184  So for example if you wanted to configure a Google Drive remote but
  1185  using remote authorization you would do this:
  1186  
  1187      rclone config create mydrive drive config_is_local false
  1188  
  1189      rclone config create <name> <type> [<key> <value>]* [flags]
  1190  
  1191  Options
  1192  
  1193        -h, --help   help for create
  1194  
  1195  SEE ALSO
  1196  
  1197  -   rclone config - Enter an interactive configuration session.
  1198  
  1199  Auto generated by spf13/cobra on 15-Jun-2019
  1200  
  1201  
  1202  rclone config delete
  1203  
  1204  Delete an existing remote <name>.
  1205  
  1206  Synopsis
  1207  
  1208  Delete an existing remote <name>.
  1209  
  1210      rclone config delete <name> [flags]
  1211  
  1212  Options
  1213  
  1214        -h, --help   help for delete
  1215  
  1216  SEE ALSO
  1217  
  1218  -   rclone config - Enter an interactive configuration session.
  1219  
  1220  Auto generated by spf13/cobra on 15-Jun-2019
  1221  
  1222  
  1223  rclone config dump
  1224  
  1225  Dump the config file as JSON.
  1226  
  1227  Synopsis
  1228  
  1229  Dump the config file as JSON.
  1230  
  1231      rclone config dump [flags]
  1232  
  1233  Options
  1234  
  1235        -h, --help   help for dump
  1236  
  1237  SEE ALSO
  1238  
  1239  -   rclone config - Enter an interactive configuration session.
  1240  
  1241  Auto generated by spf13/cobra on 15-Jun-2019
  1242  
  1243  
  1244  rclone config edit
  1245  
  1246  Enter an interactive configuration session.
  1247  
  1248  Synopsis
  1249  
  1250  Enter an interactive configuration session where you can set up new
  1251  remotes and manage existing ones. You may also set or remove a password
  1252  to protect your configuration.
  1253  
  1254      rclone config edit [flags]
  1255  
  1256  Options
  1257  
  1258        -h, --help   help for edit
  1259  
  1260  SEE ALSO
  1261  
  1262  -   rclone config - Enter an interactive configuration session.
  1263  
  1264  Auto generated by spf13/cobra on 15-Jun-2019
  1265  
  1266  
  1267  rclone config file
  1268  
  1269  Show path of configuration file in use.
  1270  
  1271  Synopsis
  1272  
  1273  Show path of configuration file in use.
  1274  
  1275      rclone config file [flags]
  1276  
  1277  Options
  1278  
  1279        -h, --help   help for file
  1280  
  1281  SEE ALSO
  1282  
  1283  -   rclone config - Enter an interactive configuration session.
  1284  
  1285  Auto generated by spf13/cobra on 15-Jun-2019
  1286  
  1287  
  1288  rclone config password
  1289  
  1290  Update password in an existing remote.
  1291  
  1292  Synopsis
  1293  
  1294  Update an existing remote’s password. The password should be passed in
  1295  pairs of <key> <value>.
  1296  
  1297  For example to set the password of a remote named myremote you would do:
  1298  
  1299      rclone config password myremote fieldname mypassword
  1300  
  1301  This command is obsolete now that “config update” and “config create”
  1302  both support obscuring passwords directly.
  1303  
  1304      rclone config password <name> [<key> <value>]+ [flags]
  1305  
  1306  Options
  1307  
  1308        -h, --help   help for password
  1309  
  1310  SEE ALSO
  1311  
  1312  -   rclone config - Enter an interactive configuration session.
  1313  
  1314  Auto generated by spf13/cobra on 15-Jun-2019
  1315  
  1316  
  1317  rclone config providers
  1318  
  1319  List in JSON format all the providers and options.
  1320  
  1321  Synopsis
  1322  
  1323  List in JSON format all the providers and options.
  1324  
  1325      rclone config providers [flags]
  1326  
  1327  Options
  1328  
  1329        -h, --help   help for providers
  1330  
  1331  SEE ALSO
  1332  
  1333  -   rclone config - Enter an interactive configuration session.
  1334  
  1335  Auto generated by spf13/cobra on 15-Jun-2019
  1336  
  1337  
  1338  rclone config show
  1339  
  1340  Print (decrypted) config file, or the config for a single remote.
  1341  
  1342  Synopsis
  1343  
  1344  Print (decrypted) config file, or the config for a single remote.
  1345  
  1346      rclone config show [<remote>] [flags]
  1347  
  1348  Options
  1349  
  1350        -h, --help   help for show
  1351  
  1352  SEE ALSO
  1353  
  1354  -   rclone config - Enter an interactive configuration session.
  1355  
  1356  Auto generated by spf13/cobra on 15-Jun-2019
  1357  
  1358  
  1359  rclone config update
  1360  
  1361  Update options in an existing remote.
  1362  
  1363  Synopsis
  1364  
  1365  Update an existing remote’s options. The options should be passed in
  1366  pairs of <key> <value>.
  1367  
  1368  For example to update the env_auth field of a remote named myremote
  1369  you would do:
  1370  
  1371      rclone config update myremote swift env_auth true
  1372  
  1373  If any of the parameters passed is a password field, then rclone will
  1374  automatically obscure them before putting them in the config file.
  1375  
  1376  If the remote uses oauth the token will be updated, if you don’t require
  1377  this add an extra parameter thus:
  1378  
  1379      rclone config update myremote swift env_auth true config_refresh_token false
  1380  
  1381      rclone config update <name> [<key> <value>]+ [flags]
  1382  
  1383  Options
  1384  
  1385        -h, --help   help for update
  1386  
  1387  SEE ALSO
  1388  
  1389  -   rclone config - Enter an interactive configuration session.
  1390  
  1391  Auto generated by spf13/cobra on 15-Jun-2019
  1392  
  1393  
  1394  rclone copyto
  1395  
  1396  Copy files from source to dest, skipping already copied
  1397  
  1398  Synopsis
  1399  
  1400  If source:path is a file or directory then it copies it to a file or
  1401  directory named dest:path.
  1402  
  1403  This can be used to upload single files under a name other than their
  1404  current one. If the source is a directory then it acts exactly like the
  1405  copy command.
  1406  
  1407  So
  1408  
  1409      rclone copyto src dst
  1410  
  1411  where src and dst are rclone paths, either remote:path or /path/to/local
  1412  or C:\windows\path\if\on\windows.
  1413  
  1414  This will:
  1415  
  1416      if src is file
  1417          copy it to dst, overwriting an existing file if it exists
  1418      if src is directory
  1419          copy it to dst, overwriting existing files if they exist
  1420          see copy command for full details
  1421  
  1422  This doesn’t transfer unchanged files, testing by size and modification
  1423  time or MD5SUM. It doesn’t delete files from the destination.
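
        For example, to upload a single file under a different name (the paths
        shown are illustrative):

            rclone copyto /path/to/local/report.txt remote:backup/report-latest.txt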
  1424  
  1425  NOTE: Use the -P/--progress flag to view real-time transfer statistics
  1426  
  1427      rclone copyto source:path dest:path [flags]
  1428  
  1429  Options
  1430  
  1431        -h, --help   help for copyto
  1432  
  1433  SEE ALSO
  1434  
  1435  -   rclone - Show help for rclone commands, flags and backends.
  1436  
  1437  Auto generated by spf13/cobra on 15-Jun-2019
  1438  
  1439  
  1440  rclone copyurl
  1441  
  1442  Copy url content to dest.
  1443  
  1444  Synopsis
  1445  
  1446  Download a url’s content and copy it to the destination without saving
  1447  it in temporary storage.
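
        For example (the URL and destination path are illustrative):

            rclone copyurl https://example.com/archive.zip remote:backup/archive.zip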
  1448  
  1449      rclone copyurl https://example.com dest:path [flags]
  1450  
  1451  Options
  1452  
  1453        -h, --help   help for copyurl
  1454  
  1455  SEE ALSO
  1456  
  1457  -   rclone - Show help for rclone commands, flags and backends.
  1458  
  1459  Auto generated by spf13/cobra on 15-Jun-2019
  1460  
  1461  
  1462  rclone cryptcheck
  1463  
  1464  Cryptcheck checks the integrity of a crypted remote.
  1465  
  1466  Synopsis
  1467  
  1468  rclone cryptcheck checks a remote against a crypted remote. This is the
  1469  equivalent of running rclone check, but able to check the checksums of
  1470  the crypted remote.
  1471  
  1472  For it to work the underlying remote of the cryptedremote must support
  1473  some kind of checksum.
  1474  
  1475  It works by reading the nonce from each file on the cryptedremote: and
  1476  using that to encrypt each file on the remote:. It then checks the
  1477  checksum of the underlying file on the cryptedremote: against the
  1478  checksum of the file it has just encrypted.
  1479  
  1480  Use it like this
  1481  
  1482      rclone cryptcheck /path/to/files encryptedremote:path
  1483  
  1484  You can use it like this also, but that will involve downloading all the
  1485  files in remote:path.
  1486  
  1487      rclone cryptcheck remote:path encryptedremote:path
  1488  
  1489  After it has run it will log the status of the encryptedremote:.
  1490  
  1491  If you supply the --one-way flag, it will only check that files in source
  1492  match the files in destination, not the other way around. Meaning extra
  1493  files in destination that are not in the source will not trigger an
  1494  error.
  1495  
  1496      rclone cryptcheck remote:path cryptedremote:path [flags]
  1497  
  1498  Options
  1499  
  1500        -h, --help      help for cryptcheck
  1501            --one-way   Check one way only, source files must exist on destination
  1502  
  1503  SEE ALSO
  1504  
  1505  -   rclone - Show help for rclone commands, flags and backends.
  1506  
  1507  Auto generated by spf13/cobra on 15-Jun-2019
  1508  
  1509  
  1510  rclone cryptdecode
  1511  
  1512  Cryptdecode returns unencrypted file names.
  1513  
  1514  Synopsis
  1515  
  1516  rclone cryptdecode returns unencrypted file names when provided with a
  1517  list of encrypted file names. List limit is 10 items.
  1518  
  1519  If you supply the --reverse flag, it will return encrypted file names.
  1520  
  1521  Use it like this
  1522  
  1523      rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
  1524  
  1525      rclone cryptdecode --reverse encryptedremote: filename1 filename2
  1526  
  1527      rclone cryptdecode encryptedremote: encryptedfilename [flags]
  1528  
  1529  Options
  1530  
  1531        -h, --help      help for cryptdecode
  1532            --reverse   Reverse cryptdecode, encrypts filenames
  1533  
  1534  SEE ALSO
  1535  
  1536  -   rclone - Show help for rclone commands, flags and backends.
  1537  
  1538  Auto generated by spf13/cobra on 15-Jun-2019
  1539  
  1540  
  1541  rclone dbhashsum
  1542  
  1543  Produces a Dropbox hash file for all the objects in the path.
  1544  
  1545  Synopsis
  1546  
  1547  Produces a Dropbox hash file for all the objects in the path. The hashes
  1548  are calculated according to Dropbox content hash rules. The output is in
  1549  the same format as md5sum and sha1sum.
  1550  
  1551      rclone dbhashsum remote:path [flags]
  1552  
  1553  Options
  1554  
  1555        -h, --help   help for dbhashsum
  1556  
  1557  SEE ALSO
  1558  
  1559  -   rclone - Show help for rclone commands, flags and backends.
  1560  
  1561  Auto generated by spf13/cobra on 15-Jun-2019
  1562  
  1563  
  1564  rclone deletefile
  1565  
  1566  Remove a single file from remote.
  1567  
  1568  Synopsis
  1569  
  1570  Remove a single file from remote. Unlike delete it cannot be used to
  1571  remove a directory and it doesn’t obey include/exclude filters - if the
  1572  specified file exists, it will always be removed.
  1573  
  1574      rclone deletefile remote:path [flags]
  1575  
  1576  Options
  1577  
  1578        -h, --help   help for deletefile
  1579  
  1580  SEE ALSO
  1581  
  1582  -   rclone - Show help for rclone commands, flags and backends.
  1583  
  1584  Auto generated by spf13/cobra on 15-Jun-2019
  1585  
  1586  
  1587  rclone genautocomplete
  1588  
  1589  Output completion script for a given shell.
  1590  
  1591  Synopsis
  1592  
  1593  Generates a shell completion script for rclone. Run with --help to list
  1594  the supported shells.
  1595  
  1596  Options
  1597  
  1598        -h, --help   help for genautocomplete
  1599  
  1600  SEE ALSO
  1601  
  1602  -   rclone - Show help for rclone commands, flags and backends.
  1603  -   rclone genautocomplete bash - Output bash completion script for
  1604      rclone.
  1605  -   rclone genautocomplete zsh - Output zsh completion script for
  1606      rclone.
  1607  
  1608  Auto generated by spf13/cobra on 15-Jun-2019
  1609  
  1610  
  1611  rclone genautocomplete bash
  1612  
  1613  Output bash completion script for rclone.
  1614  
  1615  Synopsis
  1616  
  1617  Generates a bash shell autocompletion script for rclone.
  1618  
  1619  This writes to /etc/bash_completion.d/rclone by default so will probably
  1620  need to be run with sudo or as root, eg
  1621  
  1622      sudo rclone genautocomplete bash
  1623  
  1624  Logout and login again to use the autocompletion scripts, or source them
  1625  directly
  1626  
  1627      . /etc/bash_completion
  1628  
  1629  If you supply a command line argument the script will be written there.
  1630  
  1631      rclone genautocomplete bash [output_file] [flags]
  1632  
  1633  Options
  1634  
  1635        -h, --help   help for bash
  1636  
  1637  SEE ALSO
  1638  
  1639  -   rclone genautocomplete - Output completion script for a given shell.
  1640  
  1641  Auto generated by spf13/cobra on 15-Jun-2019
  1642  
  1643  
  1644  rclone genautocomplete zsh
  1645  
  1646  Output zsh completion script for rclone.
  1647  
  1648  Synopsis
  1649  
  1650  Generates a zsh autocompletion script for rclone.
  1651  
  1652  This writes to /usr/share/zsh/vendor-completions/_rclone by default so
  1653  will probably need to be run with sudo or as root, eg
  1654  
  1655      sudo rclone genautocomplete zsh
  1656  
  1657  Logout and login again to use the autocompletion scripts, or source them
  1658  directly
  1659  
  1660      autoload -U compinit && compinit
  1661  
  1662  If you supply a command line argument the script will be written there.
  1663  
  1664      rclone genautocomplete zsh [output_file] [flags]
  1665  
  1666  Options
  1667  
  1668        -h, --help   help for zsh
  1669  
  1670  SEE ALSO
  1671  
  1672  -   rclone genautocomplete - Output completion script for a given shell.
  1673  
  1674  Auto generated by spf13/cobra on 15-Jun-2019
  1675  
  1676  
  1677  rclone gendocs
  1678  
  1679  Output markdown docs for rclone to the directory supplied.
  1680  
  1681  Synopsis
  1682  
  1683  This produces markdown docs for the rclone commands to the directory
  1684  supplied. These are in a format suitable for hugo to render into the
  1685  rclone.org website.
  1686  
  1687      rclone gendocs output_directory [flags]
  1688  
  1689  Options
  1690  
  1691        -h, --help   help for gendocs
  1692  
  1693  SEE ALSO
  1694  
  1695  -   rclone - Show help for rclone commands, flags and backends.
  1696  
  1697  Auto generated by spf13/cobra on 15-Jun-2019
  1698  
  1699  
  1700  rclone hashsum
  1701  
  1702  Produces a hashsum file for all the objects in the path.
  1703  
  1704  Synopsis
  1705  
  1706  Produces a hash file for all the objects in the path using the hash
  1707  named. The output is in the same format as the standard md5sum/sha1sum
  1708  tool.
  1709  
  1710  Run without a hash to see the list of supported hashes, eg
  1711  
  1712      $ rclone hashsum
  1713      Supported hashes are:
  1714        * MD5
  1715        * SHA-1
  1716        * DropboxHash
  1717        * QuickXorHash
  1718  
  1719  Then
  1720  
  1721      $ rclone hashsum MD5 remote:path
  1722  
  1723      rclone hashsum <hash> remote:path [flags]
  1724  
  1725  Options
  1726  
  1727        -h, --help   help for hashsum
  1728  
  1729  SEE ALSO
  1730  
  1731  -   rclone - Show help for rclone commands, flags and backends.
  1732  
  1733  Auto generated by spf13/cobra on 15-Jun-2019
  1734  
  1735  
  1736  rclone link
  1737  
  1738  Generate public link to file/folder.
  1739  
  1740  Synopsis
  1741  
  1742  rclone link will create or retrieve a public link to the given file or
  1743  folder.
  1744  
  1745      rclone link remote:path/to/file
  1746      rclone link remote:path/to/folder/
  1747  
  1748  If successful, the last line of the output will contain the link. Exact
  1749  capabilities depend on the remote, but the link will always be created
  1750  with the least constraints – e.g. no expiry, no password protection,
  1751  accessible without account.
  1752  
  1753      rclone link remote:path [flags]
  1754  
  1755  Options
  1756  
  1757        -h, --help   help for link
  1758  
  1759  SEE ALSO
  1760  
  1761  -   rclone - Show help for rclone commands, flags and backends.
  1762  
  1763  Auto generated by spf13/cobra on 15-Jun-2019
  1764  
  1765  
  1766  rclone listremotes
  1767  
  1768  List all the remotes in the config file.
  1769  
  1770  Synopsis
  1771  
  1772  rclone listremotes lists all the available remotes from the config file.
  1773  
When used with the --long flag it lists the types too.
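
The output is easy to use in scripts. For example, a small shell sketch
which lists the top level directories of every configured remote, assuming
each remote is printed on its own line with its trailing colon:

    for remote in $(rclone listremotes); do
        rclone lsd "$remote"
    done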
  1775  
  1776      rclone listremotes [flags]
  1777  
  1778  Options
  1779  
  1780        -h, --help   help for listremotes
  1781            --long   Show the type as well as names.
  1782  
  1783  SEE ALSO
  1784  
  1785  -   rclone - Show help for rclone commands, flags and backends.
  1786  
  1787  Auto generated by spf13/cobra on 15-Jun-2019
  1788  
  1789  
  1790  rclone lsf
  1791  
  1792  List directories and objects in remote:path formatted for parsing
  1793  
  1794  Synopsis
  1795  
  1796  List the contents of the source path (directories and objects) to
  1797  standard output in a form which is easy to parse by scripts. By default
  1798  this will just be the names of the objects and directories, one per
  1799  line. The directories will have a / suffix.
  1800  
  1801  Eg
  1802  
  1803      $ rclone lsf swift:bucket
  1804      bevajer5jef
  1805      canole
  1806      diwogej7
  1807      ferejej3gux/
  1808      fubuwic
  1809  
Use the --format option to control what gets listed. By default this is
  1811  just the path, but you can use these parameters to control the output:
  1812  
  1813      p - path
  1814      s - size
  1815      t - modification time
  1816      h - hash
  1817      i - ID of object
  1818      o - Original ID of underlying object
  1819      m - MimeType of object if known
  1820      e - encrypted name
  1821      T - tier of storage if known, eg "Hot" or "Cool"
  1822  
So if you wanted the path, size and modification time, you would use
--format “pst”, or maybe --format “tsp” to put the path last.
  1825  
  1826  Eg
  1827  
  1828      $ rclone lsf  --format "tsp" swift:bucket
  1829      2016-06-25 18:55:41;60295;bevajer5jef
  1830      2016-06-25 18:55:43;90613;canole
  1831      2016-06-25 18:55:43;94467;diwogej7
  1832      2018-04-26 08:50:45;0;ferejej3gux/
  1833      2016-06-25 18:55:40;37600;fubuwic
  1834  
If you specify “h” in the format you will get the MD5 hash by default;
use the “--hash” flag to change which hash you want. Note that this can
be returned as an empty string if it isn’t available on the object (and
for directories), “ERROR” if there was an error reading it from the
object and “UNSUPPORTED” if that object does not support that hash type.
  1840  
  1841  For example to emulate the md5sum command you can use
  1842  
  1843      rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .
  1844  
  1845  Eg
  1846  
  1847      $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket 
  1848      7908e352297f0f530b84a756f188baa3  bevajer5jef
  1849      cd65ac234e6fea5925974a51cdd865cc  canole
  1850      03b5341b4f234b9d984d03ad076bae91  diwogej7
  1851      8fd37c3810dd660778137ac3a66cc06d  fubuwic
  1852      99713e14a4c4ff553acaf1930fad985b  gixacuh7ku
  1853  
  1854  (Though “rclone md5sum .” is an easier way of typing this.)
  1855  
By default the separator is “;”. This can be changed with the --separator
flag. Note that separators aren’t escaped in the path so putting it last
is a good strategy.
  1859  
  1860  Eg
  1861  
  1862      $ rclone lsf  --separator "," --format "tshp" swift:bucket
  1863      2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
  1864      2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
  1865      2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
  1866      2018-04-26 08:52:53,0,,ferejej3gux/
  1867      2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic
  1868  
  1869  You can output in CSV standard format. This will escape things in " if
  1870  they contain ,
  1871  
  1872  Eg
  1873  
  1874      $ rclone lsf --csv --files-only --format ps remote:path
  1875      test.log,22355
  1876      test.sh,449
  1877      "this file contains a comma, in the file name.txt",6
  1878  
Note that the --absolute parameter is useful for making lists of files to
pass to an rclone copy with the --files-from flag.
  1881  
  1882  For example to find all the files modified within one day and copy those
  1883  only (without traversing the whole directory structure):
  1884  
  1885      rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
  1886      rclone copy --files-from new_files /path/to/local remote:path
  1887  
  1888  Any of the filtering options can be applied to this command.
  1889  
  1890  There are several related list commands
  1891  
  1892  -   ls to list size and path of objects only
  1893  -   lsl to list modification time, size and path of objects only
  1894  -   lsd to list directories only
  1895  -   lsf to list objects and directories in easy to parse format
  1896  -   lsjson to list objects and directories in JSON format
  1897  
ls, lsl and lsd are designed to be human readable. lsf is designed to be
human and machine readable. lsjson is designed to be machine readable.

Note that ls and lsl recurse by default - use “--max-depth 1” to stop the
recursion.

The other list commands lsd, lsf and lsjson do not recurse by default - use
“-R” to make them recurse.
  1906  
Listing a non-existent directory will produce an error except for
  1908  remotes which can’t have empty directories (eg s3, swift, gcs, etc - the
  1909  bucket based remotes).
  1910  
  1911      rclone lsf remote:path [flags]
  1912  
  1913  Options
  1914  
  1915            --absolute           Put a leading / in front of path names.
  1916            --csv                Output in CSV format.
  1917        -d, --dir-slash          Append a slash to directory names. (default true)
  1918            --dirs-only          Only list directories.
  1919            --files-only         Only list files.
  1920        -F, --format string      Output format - see  help for details (default "p")
  1921            --hash h             Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
  1922        -h, --help               help for lsf
  1923        -R, --recursive          Recurse into the listing.
  1924        -s, --separator string   Separator for the items in the format. (default ";")
  1925  
  1926  SEE ALSO
  1927  
  1928  -   rclone - Show help for rclone commands, flags and backends.
  1929  
  1930  Auto generated by spf13/cobra on 15-Jun-2019
  1931  
  1932  
  1933  rclone lsjson
  1934  
  1935  List directories and objects in the path in JSON format.
  1936  
  1937  Synopsis
  1938  
  1939  List directories and objects in the path in JSON format.
  1940  
  1941  The output is an array of Items, where each Item looks like this
  1942  
    {
      "Hashes" : {
         "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
         "MD5" : "b1946ac92492d2347c6235b4d2611184",
         "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
      },
      "ID": "y2djkhiujf83u33",
      "OrigID": "UYOJVTUW00Q1RzTDA",
      "IsBucket" : false,
      "IsDir" : false,
      "MimeType" : "application/octet-stream",
      "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
      "Name" : "file.txt",
      "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
      "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
      "Path" : "full/path/goes/here/file.txt",
      "Size" : 6,
      "Tier" : "hot"
    }
  1952  
If --hash is not specified the Hashes property won’t be emitted.

If --no-modtime is specified then ModTime will be blank.

If --encrypted is not specified the Encrypted property won’t be emitted.

If --dirs-only is not specified, files in addition to directories are
returned.

If --files-only is not specified, directories in addition to the files
will be returned.
  1964  
  1965  The Path field will only show folders below the remote path being
  1966  listed. If “remote:path” contains the file “subfolder/file.txt”, the
  1967  Path for “file.txt” will be “subfolder/file.txt”, not
“remote:path/subfolder/file.txt”. When used without --recursive the Path
  1969  will always be the same as Name.
  1970  
  1971  If the directory is a bucket in a bucket based backend, then “IsBucket”
  1972  will be set to true. This key won’t be present unless it is “true”.
  1973  
  1974  The time is in RFC3339 format with up to nanosecond precision. The
  1975  number of decimal digits in the seconds will depend on the precision
  1976  that the remote can hold the times, so if times are accurate to the
  1977  nearest millisecond (eg Google Drive) then 3 digits will always be shown
  1978  (“2017-05-31T16:15:57.034+01:00”) whereas if the times are accurate to
  1979  the nearest second (Dropbox, Box, WebDav etc) no digits will be shown
  1980  (“2017-05-31T16:15:57+01:00”).
  1981  
  1982  The whole output can be processed as a JSON blob, or alternatively it
  1983  can be processed line by line as each item is written one to a line.
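
For example, assuming the jq tool is available, the paths of all files
(but not directories) could be pulled out of the JSON like this:

    rclone lsjson -R remote:path | jq -r '.[] | select(.IsDir | not) | .Path'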
  1984  
  1985  Any of the filtering options can be applied to this command.
  1986  
  1987  There are several related list commands
  1988  
  1989  -   ls to list size and path of objects only
  1990  -   lsl to list modification time, size and path of objects only
  1991  -   lsd to list directories only
  1992  -   lsf to list objects and directories in easy to parse format
  1993  -   lsjson to list objects and directories in JSON format
  1994  
ls, lsl and lsd are designed to be human readable. lsf is designed to be
human and machine readable. lsjson is designed to be machine readable.

Note that ls and lsl recurse by default - use “--max-depth 1” to stop the
recursion.

The other list commands lsd, lsf and lsjson do not recurse by default - use
“-R” to make them recurse.
  2003  
Listing a non-existent directory will produce an error except for
  2005  remotes which can’t have empty directories (eg s3, swift, gcs, etc - the
  2006  bucket based remotes).
  2007  
  2008      rclone lsjson remote:path [flags]
  2009  
  2010  Options
  2011  
  2012            --dirs-only    Show only directories in the listing.
  2013        -M, --encrypted    Show the encrypted names.
  2014            --files-only   Show only files in the listing.
  2015            --hash         Include hashes in the output (may take longer).
  2016        -h, --help         help for lsjson
  2017            --no-modtime   Don't read the modification time (can speed things up).
  2018            --original     Show the ID of the underlying Object.
  2019        -R, --recursive    Recurse into the listing.
  2020  
  2021  SEE ALSO
  2022  
  2023  -   rclone - Show help for rclone commands, flags and backends.
  2024  
  2025  Auto generated by spf13/cobra on 15-Jun-2019
  2026  
  2027  
  2028  rclone mount
  2029  
  2030  Mount the remote as file system on a mountpoint.
  2031  
  2032  Synopsis
  2033  
  2034  rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
  2035  Rclone’s cloud storage systems as a file system with FUSE.
  2036  
  2037  First set up your remote using rclone config. Check it works with
  2038  rclone ls etc.
  2039  
  2040  Start the mount like this
  2041  
  2042      rclone mount remote:path/to/files /path/to/local/mount
  2043  
  2044  Or on Windows like this where X: is an unused drive letter
  2045  
  2046      rclone mount remote:path/to/files X:
  2047  
  2048  When the program ends, either via Ctrl+C or receiving a SIGINT or
  2049  SIGTERM signal, the mount is automatically stopped.
  2050  
  2051  The umount operation can fail, for example when the mountpoint is busy.
  2052  When that happens, it is the user’s responsibility to stop the mount
  2053  manually with
  2054  
  2055      # Linux
  2056      fusermount -u /path/to/local/mount
  2057      # OS X
  2058      umount /path/to/local/mount
  2059  
  2060  Installing on Windows
  2061  
  2062  To run rclone mount on Windows, you will need to download and install
  2063  WinFsp.
  2064  
  2065  WinFsp is an open source Windows File System Proxy which makes it easy
to write user space file systems for Windows. It provides a FUSE
emulation layer which rclone uses in combination with cgofuse. Both of
  2068  these packages are by Bill Zissimopoulos who was very helpful during the
  2069  implementation of rclone mount for Windows.
  2070  
  2071  Windows caveats
  2072  
  2073  Note that drives created as Administrator are not visible by other
  2074  accounts (including the account that was elevated as Administrator). So
  2075  if you start a Windows drive from an Administrative Command Prompt and
  2076  then try to access the same drive from Explorer (which does not run as
  2077  Administrator), you will not be able to see the new drive.
  2078  
  2079  The easiest way around this is to start the drive from a normal command
  2080  prompt. It is also possible to start a drive from the SYSTEM account
  2081  (using the WinFsp.Launcher infrastructure) which creates drives
  2082  accessible for everyone on the system or alternatively using the nssm
  2083  service manager.
  2084  
  2085  Limitations
  2086  
Without the use of “--vfs-cache-mode” this can only write files
sequentially, and it can only seek when reading. This means that many
applications won’t work with their files on an rclone mount without
“--vfs-cache-mode writes” or “--vfs-cache-mode full”. See the File Caching
section for more info.
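
For example, to mount with write caching enabled (the paths are
illustrative):

    rclone mount remote:path /path/to/local/mount --vfs-cache-mode writes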
  2092  
The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2,
  2094  Hubic) won’t work from the root - you will need to specify a bucket, or
  2095  a path within the bucket. So swift: won’t work whereas swift:bucket will
  2096  as will swift:bucket/path. None of these support the concept of
  2097  directories, so empty directories will have a tendency to disappear once
  2098  they fall out of the directory cache.
  2099  
  2100  Only supported on Linux, FreeBSD, OS X and Windows at the moment.
  2101  
  2102  rclone mount vs rclone sync/copy
  2103  
  2104  File systems expect things to be 100% reliable, whereas cloud storage
  2105  systems are a long way from 100% reliable. The rclone sync/copy commands
  2106  cope with this with lots of retries. However rclone mount can’t use
  2107  retries in the same way without making local copies of the uploads. Look
  2108  at the file caching for solutions to make mount more reliable.
  2109  
  2110  Attribute caching
  2111  
You can use the flag --attr-timeout to set the time the kernel caches the
  2113  attributes (size, modification time etc) for directory entries.
  2114  
  2115  The default is “1s” which caches files just long enough to avoid too
  2116  many callbacks to rclone from the kernel.
  2117  
  2118  In theory 0s should be the correct value for filesystems which can
  2119  change outside the control of the kernel. However this causes quite a
  2120  few problems such as rclone using too much memory, rclone not serving
  2121  files to samba and excessive time listing directories.
  2122  
  2123  The kernel can cache the info about a file for the time given by
“--attr-timeout”. You may see corruption if the remote file changes
length during this window. It will show up as either a truncated file or
a file with garbage on the end. With “--attr-timeout 1s” this is very
unlikely but not impossible. The higher you set “--attr-timeout” the more
  2128  likely it is. The default setting of “1s” is the lowest setting which
  2129  mitigates the problems above.
  2130  
  2131  If you set it higher (‘10s’ or ‘1m’ say) then the kernel will call back
  2132  to rclone less often making it more efficient, however there is more
  2133  chance of the corruption issue above.
  2134  
  2135  If files don’t change on the remote outside of the control of rclone
  2136  then there is no chance of corruption.
  2137  
  2138  This is the same as setting the attr_timeout option in mount.fuse.
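
For example, to accept the trade-off described above and cache attributes
for longer (the paths are illustrative):

    rclone mount remote:path /path/to/local/mount --attr-timeout 10s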
  2139  
  2140  Filters
  2141  
  2142  Note that all the rclone filters can be used to select a subset of the
  2143  files to be visible in the mount.
  2144  
  2145  systemd
  2146  
  2147  When running rclone mount as a systemd service, it is possible to use
  2148  Type=notify. In this case the service will enter the started state after
  2149  the mountpoint has been successfully set up. Units having the rclone
  2150  mount service specified as a requirement will see all files and folders
  2151  immediately in this mode.
  2152  
  2153  chunked reading
  2154  
--vfs-read-chunk-size will enable reading the source objects in parts.
This can reduce the used download quota for some remotes by requesting
only chunks from the remote that are actually read at the cost of an
increased number of requests.

When --vfs-read-chunk-size-limit is also specified and greater than
--vfs-read-chunk-size, the chunk size for each open file will get doubled
for each chunk read, until the specified value is reached. A value of -1
will disable the limit and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the
following parts will be downloaded: 0-100M, 100M-200M, 200M-300M,
300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified,
the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M,
1200M-1700M and so on.

Chunked reading will only work with --vfs-cache-mode < full, as the file
will always be copied to the vfs cache before opening with
--vfs-cache-mode full.
  2174  
  2175  Directory Cache
  2176  
  2177  Using the --dir-cache-time flag, you can set how long a directory should
  2178  be considered up to date and not refreshed from the backend. Changes
  2179  made locally in the mount may appear immediately or invalidate the
  2180  cache. However, changes done on the remote will only be picked up once
  2181  the cache expires.
  2182  
  2183  Alternatively, you can send a SIGHUP signal to rclone for it to flush
  2184  all directory caches, regardless of how old they are. Assuming only one
  2185  rclone instance is running, you can reset the cache like this:
  2186  
  2187      kill -SIGHUP $(pidof rclone)
  2188  
  2189  If you configure rclone with a remote control then you can use rclone rc
  2190  to flush the whole directory cache:
  2191  
  2192      rclone rc vfs/forget
  2193  
  2194  Or individual files or directories:
  2195  
  2196      rclone rc vfs/forget file=path/to/file dir=path/to/dir
  2197  
  2198  File Buffering
  2199  
The --buffer-size flag determines the amount of memory that will be
used to buffer data in advance.
  2202  
  2203  Each open file descriptor will try to keep the specified amount of data
  2204  in memory at all times. The buffered data is bound to one file
  2205  descriptor and won’t be shared between multiple open file descriptors of
  2206  the same file.
  2207  
This flag is an upper limit for the used memory per file descriptor. The
buffer will only use memory for data that is downloaded but not yet
  2210  read. If the buffer is empty, only a small amount of memory will be
  2211  used. The maximum memory used by rclone for buffering can be up to
  2212  --buffer-size * open files.
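
For example, to reduce the per file memory used by a mount (the value and
paths are illustrative):

    rclone mount remote:path /path/to/local/mount --buffer-size 8M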
  2213  
  2214  File Caching
  2215  
  2216  These flags control the VFS file caching options. The VFS layer is used
  2217  by rclone mount to make a cloud storage system work more like a normal
  2218  file system.
  2219  
  2220  You’ll need to enable VFS caching if you want, for example, to read and
  2221  write simultaneously to a file. See below for more details.
  2222  
  2223  Note that the VFS cache works in addition to the cache backend and you
  2224  may find that you need one or the other or both.
  2225  
  2226      --cache-dir string                   Directory rclone will use for caching.
  2227      --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
  2228      --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
  2229      --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
  2230      --vfs-cache-max-size int             Max total size of objects in the cache. (default off)
  2231  
  2232  If run with -vv rclone will print the location of the file cache. The
  2233  files are stored in the user cache file area which is OS dependent but
  2234  can be controlled with --cache-dir or setting the appropriate
  2235  environment variable.
  2236  
  2237  The cache has 4 different modes selected by --vfs-cache-mode. The higher
  2238  the cache mode the more compatible rclone becomes at the cost of using
  2239  disk space.
  2240  
  2241  Note that files are written back to the remote only when they are closed
  2242  so if rclone is quit or dies with open files then these won’t get
  2243  written back to the remote. However they will still be in the on disk
  2244  cache.
  2245  
If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
  2249  from the cache.
  2250  
--vfs-cache-mode off
  2252  
  2253  In this mode the cache will read directly from the remote and write
  2254  directly to the remote without caching anything on disk.
  2255  
  2256  This will mean some operations are not possible
  2257  
  2258  -   Files can’t be opened for both read AND write
  2259  -   Files opened for write can’t be seeked
  2260  -   Existing files opened for write must have O_TRUNC set
  2261  -   Files open for read with O_TRUNC will be opened write only
  2262  -   Files open for write only will behave as if O_TRUNC was supplied
  2263  -   Open modes O_APPEND, O_TRUNC are ignored
  2264  -   If an upload fails it can’t be retried
  2265  
--vfs-cache-mode minimal
  2267  
  2268  This is very similar to “off” except that files opened for read AND
write will be buffered to disk. This means that files opened for write
will be a lot more compatible, but uses minimal disk space.
  2271  
  2272  These operations are not possible
  2273  
  2274  -   Files opened for write only can’t be seeked
  2275  -   Existing files opened for write must have O_TRUNC set
  2276  -   Files opened for write only will ignore O_APPEND, O_TRUNC
  2277  -   If an upload fails it can’t be retried
  2278  
--vfs-cache-mode writes
  2280  
  2281  In this mode files opened for read only are still read directly from the
  2282  remote, write only and read/write files are buffered to disk first.
  2283  
  2284  This mode should support all normal file system operations.
  2285  
If an upload fails it will be retried up to --low-level-retries times.
  2287  
--vfs-cache-mode full
  2289  
  2290  In this mode all reads and writes are buffered to and from disk. When a
  2291  file is opened for read it will be downloaded in its entirety first.
  2292  
  2293  This may be appropriate for your needs, or you may prefer to look at the
  2294  cache backend which does a much more sophisticated job of caching,
  2295  including caching directory hierarchies and chunks of files.
  2296  
  2297  In this mode, unlike the others, when a file is written to the disk, it
  2298  will be kept on the disk after it is written to the remote. It will be
  2299  purged on a schedule according to --vfs-cache-max-age.
  2300  
  2301  This mode should support all normal file system operations.
  2302  
If an upload or download fails it will be retried up to
--low-level-retries times.
  2305  
  2306      rclone mount remote:path /path/to/mountpoint [flags]
  2307  
  2308  Options
  2309  
  2310            --allow-non-empty                        Allow mounting over a non-empty directory.
  2311            --allow-other                            Allow access to other users.
  2312            --allow-root                             Allow access to root user.
  2313            --attr-timeout duration                  Time for which file/directory attributes are cached. (default 1s)
  2314            --daemon                                 Run mount as a daemon (background mode).
  2315            --daemon-timeout duration                Time limit for rclone to respond to kernel (not supported by all OSes).
  2316            --debug-fuse                             Debug the FUSE internals - needs -v.
  2317            --default-permissions                    Makes kernel enforce access control based on the file mode.
  2318            --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
  2319            --dir-perms FileMode                     Directory permissions (default 0777)
  2320            --file-perms FileMode                    File permissions (default 0666)
  2321            --fuse-flag stringArray                  Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
  2322            --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  2323        -h, --help                                   help for mount
  2324            --max-read-ahead SizeSuffix              The number of bytes that can be prefetched for sequential reads. (default 128k)
  2325            --no-checksum                            Don't compare checksums on up/download.
  2326            --no-modtime                             Don't read/write the modification time (can speed things up).
  2327            --no-seek                                Don't allow seeking in files.
  2328        -o, --option stringArray                     Option for libfuse/WinFsp. Repeat if required.
  2329            --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
  2330            --read-only                              Mount read-only.
  2331            --uid uint32                             Override the uid field set by the filesystem. (default 1000)
  2332            --umask int                              Override the permission bits set by the filesystem.
  2333            --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
  2334            --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
  2335            --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
  2336            --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
  2337            --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
  2338            --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
  2339            --volname string                         Set the volume name (not supported by all OSes).
  2340            --write-back-cache                       Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
  2341  
  2342  SEE ALSO
  2343  
  2344  -   rclone - Show help for rclone commands, flags and backends.
  2345  
  2346  Auto generated by spf13/cobra on 15-Jun-2019
  2347  
  2348  
  2349  rclone moveto
  2350  
  2351  Move file or directory from source to dest.
  2352  
  2353  Synopsis
  2354  
  2355  If source:path is a file or directory then it moves it to a file or
  2356  directory named dest:path.
  2357  
  2358  This can be used to rename files or upload single files to other than
  2359  their existing name. If the source is a directory then it acts exactly
  2360  like the move command.
  2361  
  2362  So
  2363  
  2364      rclone moveto src dst
  2365  
where src and dst are rclone paths, either remote:path or /path/to/local
or C:\windows\path\if\on\windows.
  2368  
  2369  This will:
  2370  
  2371      if src is file
  2372          move it to dst, overwriting an existing file if it exists
  2373      if src is directory
  2374          move it to dst, overwriting existing files if they exist
  2375          see move command for full details
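
For example, to rename a single file on a remote (the file names are
illustrative):

    rclone moveto remote:old-name.txt remote:new-name.txt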
  2376  
  2377  This doesn’t transfer unchanged files, testing by size and modification
  2378  time or MD5SUM. src will be deleted on successful transfer.
  2379  
IMPORTANT: Since this can cause data loss, test first with the --dry-run
  2381  flag.
  2382  
  2383  NOTE: Use the -P/--progress flag to view real-time transfer statistics.
  2384  
  2385      rclone moveto source:path dest:path [flags]
  2386  
  2387  Options
  2388  
  2389        -h, --help   help for moveto
  2390  
  2391  SEE ALSO
  2392  
  2393  -   rclone - Show help for rclone commands, flags and backends.
  2394  
  2395  Auto generated by spf13/cobra on 15-Jun-2019
  2396  
  2397  
  2398  rclone ncdu
  2399  
  2400  Explore a remote with a text based user interface.
  2401  
  2402  Synopsis
  2403  
  2404  This displays a text based user interface allowing the navigation of a
  2405  remote. It is most useful for answering the question - “What is using
  2406  all my disk space?”.
  2407  
  2408  To make the user interface it first scans the entire remote given and
  2409  builds an in memory representation. rclone ncdu can be used during this
  2410  scanning phase and you will see it building up the directory structure
  2411  as it goes along.
  2412  
  2413  Here are the keys - press ‘?’ to toggle the help on and off
  2414  
  2415       ↑,↓ or k,j to Move
  2416       →,l to enter
  2417       ←,h to return
  2418       c toggle counts
  2419       g toggle graph
  2420       n,s,C sort by name,size,count
  2421       d delete file/directory
  2422       ^L refresh screen
  2423       ? to toggle help on and off
  2424       q/ESC/c-C to quit
  2425  
This is an homage to the ncdu tool but for rclone remotes. It is missing
  2427  lots of features at the moment but is useful as it stands.
  2428  
  2429  Note that it might take some time to delete big files/folders. The UI
  2430  won’t respond in the meantime since the deletion is done synchronously.
  2431  
  2432      rclone ncdu remote:path [flags]
  2433  
  2434  Options
  2435  
  2436        -h, --help   help for ncdu
  2437  
  2438  SEE ALSO
  2439  
  2440  -   rclone - Show help for rclone commands, flags and backends.
  2441  
  2442  Auto generated by spf13/cobra on 15-Jun-2019
  2443  
  2444  
  2445  rclone obscure
  2446  
  2447  Obscure password for use in the rclone.conf
  2448  
  2449  Synopsis
  2450  
  2451  Obscure password for use in the rclone.conf
  2452  
  2453      rclone obscure password [flags]
  2454  
  2455  Options
  2456  
  2457        -h, --help   help for obscure
  2458  
  2459  SEE ALSO
  2460  
  2461  -   rclone - Show help for rclone commands, flags and backends.
  2462  
  2463  Auto generated by spf13/cobra on 15-Jun-2019
  2464  
  2465  
  2466  rclone rc
  2467  
  2468  Run a command against a running rclone.
  2469  
  2470  Synopsis
  2471  
This runs a command against a running rclone. Use the --url flag to
specify a non-default URL to connect on. This can be either a “:port”
which is taken to mean “http://localhost:port” or a “host:port” which is
taken to mean “http://host:port”.

A username and password can be passed in with --user and --pass.

Note that --rc-addr, --rc-user and --rc-pass will also be read for --url,
--user and --pass.
  2481  
  2482  Arguments should be passed in as parameter=value.
  2483  
  2484  The result will be returned as a JSON object by default.
  2485  
The --json parameter can be used to pass in a JSON blob as an input
instead of key=value arguments. This is the only way of passing in more
complicated values.

Use --loopback to connect to the rclone instance running “rclone rc”.
  2491  This is very useful for testing commands without having to run an rclone
  2492  rc server, eg:
  2493  
  2494      rclone rc --loopback operations/about fs=/
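
The same call could also be made by passing the parameters as a JSON blob
instead of key=value arguments:

    rclone rc --loopback --json '{"fs": "/"}' operations/about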
  2495  
  2496  Use “rclone rc” to see a list of all possible commands.
  2497  
  2498      rclone rc commands parameter [flags]
  2499  
  2500  Options
  2501  
  2502        -h, --help          help for rc
  2503            --json string   Input JSON - use instead of key=value args.
  2504            --loopback      If set connect to this rclone instance not via HTTP.
  2505            --no-output     If set don't output the JSON result.
  2506            --pass string   Password to use to connect to rclone remote control.
  2507            --url string    URL to connect to rclone remote control. (default "http://localhost:5572/")
  2508            --user string   Username to use to rclone remote control.
  2509  
  2510  SEE ALSO
  2511  
  2512  -   rclone - Show help for rclone commands, flags and backends.
  2513  
  2514  Auto generated by spf13/cobra on 15-Jun-2019
  2515  
  2516  
  2517  rclone rcat
  2518  
  2519  Copies standard input to file on remote.
  2520  
  2521  Synopsis
  2522  
  2523  rclone rcat reads from standard input (stdin) and copies it to a single
  2524  remote file.
  2525  
  2526      echo "hello world" | rclone rcat remote:path/to/file
  2527      ffmpeg - | rclone rcat remote:path/to/file
  2528  
  2529  If the remote file already exists, it will be overwritten.
  2530  
  2531  rcat will try to upload small files in a single request, which is
  2532  usually more efficient than the streaming/chunked upload endpoints,
  2533  which use multiple requests. Exact behaviour depends on the remote. What
  2534  is considered a small file may be set through --streaming-upload-cutoff.
  2535  Uploading only starts after the cutoff is reached or if the file ends
before that. The data must fit into RAM. The cutoff needs to be small
enough to adhere to the limits of your remote; please see the relevant
remote’s documentation. Generally speaking, setting this cutoff too high
will decrease your performance.
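
For example, to stream a tar archive to a remote while allowing streams of
up to 100M to be uploaded in a single request (the archive name is
illustrative):

    tar czf - /path/to/dir | \
        rclone rcat remote:backup/dir.tgz --streaming-upload-cutoff 100M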
  2539  
Note also that the upload cannot be retried because the data is not
kept around until the upload succeeds. If you need to transfer a lot of
data, you’re better off caching it locally and then using rclone move to
transfer it to the destination.
  2544  
  2545      rclone rcat remote:path [flags]
  2546  
  2547  Options
  2548  
  2549        -h, --help   help for rcat
  2550  
  2551  SEE ALSO
  2552  
  2553  -   rclone - Show help for rclone commands, flags and backends.
  2554  
  2555  Auto generated by spf13/cobra on 15-Jun-2019
  2556  
  2557  
  2558  rclone rcd
  2559  
  2560  Run rclone listening to remote control commands only.
  2561  
  2562  Synopsis
  2563  
  2564  This runs rclone so that it only listens to remote control commands.
  2565  
  2566  This is useful if you are controlling rclone via the rc API.
  2567  
  2568  If you pass in a path to a directory, rclone will serve that directory
  2569  for GET requests on the URL passed in. It will also open the URL in the
  2570  browser when rclone is run.
  2571  
  2572  See the rc documentation for more info on the rc flags.
  2573  
  2574      rclone rcd <path to files to serve>* [flags]
  2575  
  2576  Options
  2577  
  2578        -h, --help   help for rcd
  2579  
  2580  SEE ALSO
  2581  
  2582  -   rclone - Show help for rclone commands, flags and backends.
  2583  
  2584  Auto generated by spf13/cobra on 15-Jun-2019
  2585  
  2586  
  2587  rclone rmdirs
  2588  
  2589  Remove empty directories under the path.
  2590  
  2591  Synopsis
  2592  
  2593  This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in it.
  2596  
If you supply the --leave-root flag, it will not remove the root
  2598  directory.
  2599  
  2600  This is useful for tidying up remotes that rclone has left a lot of
  2601  empty directories in.
  2602  
  2603      rclone rmdirs remote:path [flags]
  2604  
  2605  Options
  2606  
  2607        -h, --help         help for rmdirs
  2608            --leave-root   Do not remove root directory if empty
  2609  
  2610  SEE ALSO
  2611  
  2612  -   rclone - Show help for rclone commands, flags and backends.
  2613  
  2614  Auto generated by spf13/cobra on 15-Jun-2019
  2615  
  2616  
  2617  rclone serve
  2618  
  2619  Serve a remote over a protocol.
  2620  
  2621  Synopsis
  2622  
  2623  rclone serve is used to serve a remote over a given protocol. This
  2624  command requires the use of a subcommand to specify the protocol, eg
  2625  
  2626      rclone serve http remote:
  2627  
  2628  Each subcommand has its own options which you can see in their help.
  2629  
  2630      rclone serve <protocol> [opts] <remote> [flags]
  2631  
  2632  Options
  2633  
  2634        -h, --help   help for serve
  2635  
  2636  SEE ALSO
  2637  
  2638  -   rclone - Show help for rclone commands, flags and backends.
  2639  -   rclone serve dlna - Serve remote:path over DLNA
  2640  -   rclone serve ftp - Serve remote:path over FTP.
  2641  -   rclone serve http - Serve the remote over HTTP.
  2642  -   rclone serve restic - Serve the remote for restic’s REST API.
  2643  -   rclone serve sftp - Serve the remote over SFTP.
  2644  -   rclone serve webdav - Serve remote:path over webdav.
  2645  
  2646  Auto generated by spf13/cobra on 15-Jun-2019
  2647  
  2648  
  2649  rclone serve dlna
  2650  
  2651  Serve remote:path over DLNA
  2652  
  2653  Synopsis
  2654  
rclone serve dlna is a DLNA media server for media stored in an rclone
  2656  remote. Many devices, such as the Xbox and PlayStation, can
  2657  automatically discover this server in the LAN and play audio/video from
  2658  it. VLC is also supported. Service discovery uses UDP multicast packets
  2659  (SSDP) and will thus only work on LANs.
  2660  
  2661  Rclone will list all files present in the remote, without filtering
  2662  based on media formats or file extensions. Additionally, there is no
  2663  media transcoding support. This means that some players might show files
  2664  that they are not able to play back correctly.
  2665  
  2666  Server options
  2667  
Use --addr to specify which IP address and port the server should listen
on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.

Use --name to choose the friendly server name, which is by default
“rclone (hostname)”.

Use --log-trace in conjunction with -vv to enable additional debug
logging of all UPNP traffic.
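
For example, to serve a remote on the default port with a custom friendly
name (the name is illustrative):

    rclone serve dlna remote:path --name "my rclone media"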
  2676  
  2677  Directory Cache
  2678  
  2679  Using the --dir-cache-time flag, you can set how long a directory should
  2680  be considered up to date and not refreshed from the backend. Changes
  2681  made locally in the mount may appear immediately or invalidate the
  2682  cache. However, changes done on the remote will only be picked up once
  2683  the cache expires.
  2684  
  2685  Alternatively, you can send a SIGHUP signal to rclone for it to flush
  2686  all directory caches, regardless of how old they are. Assuming only one
  2687  rclone instance is running, you can reset the cache like this:
  2688  
  2689      kill -SIGHUP $(pidof rclone)
  2690  
  2691  If you configure rclone with a remote control then you can use rclone rc
  2692  to flush the whole directory cache:
  2693  
  2694      rclone rc vfs/forget
  2695  
  2696  Or individual files or directories:
  2697  
  2698      rclone rc vfs/forget file=path/to/file dir=path/to/dir
  2699  
  2700  File Buffering
  2701  
The --buffer-size flag determines the amount of memory that will be
used to buffer data in advance.
  2704  
  2705  Each open file descriptor will try to keep the specified amount of data
  2706  in memory at all times. The buffered data is bound to one file
  2707  descriptor and won’t be shared between multiple open file descriptors of
  2708  the same file.
  2709  
This flag is an upper limit for the used memory per file descriptor. The
buffer will only use memory for data that is downloaded but not yet
  2712  read. If the buffer is empty, only a small amount of memory will be
  2713  used. The maximum memory used by rclone for buffering can be up to
  2714  --buffer-size * open files.
  2715  
  2716  File Caching
  2717  
  2718  These flags control the VFS file caching options. The VFS layer is used
  2719  by rclone mount to make a cloud storage system work more like a normal
  2720  file system.
  2721  
  2722  You’ll need to enable VFS caching if you want, for example, to read and
  2723  write simultaneously to a file. See below for more details.
  2724  
  2725  Note that the VFS cache works in addition to the cache backend and you
  2726  may find that you need one or the other or both.
  2727  
  2728      --cache-dir string                   Directory rclone will use for caching.
  2729      --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
  2730      --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
  2731      --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
  2732      --vfs-cache-max-size int             Max total size of objects in the cache. (default off)
  2733  
  2734  If run with -vv rclone will print the location of the file cache. The
  2735  files are stored in the user cache file area which is OS dependent but
  2736  can be controlled with --cache-dir or setting the appropriate
  2737  environment variable.
  2738  
  2739  The cache has 4 different modes selected by --vfs-cache-mode. The higher
  2740  the cache mode the more compatible rclone becomes at the cost of using
  2741  disk space.
  2742  
  2743  Note that files are written back to the remote only when they are closed
  2744  so if rclone is quit or dies with open files then these won’t get
  2745  written back to the remote. However they will still be in the on disk
  2746  cache.
  2747  
If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
  2751  from the cache.
  2752  
--vfs-cache-mode off
  2754  
  2755  In this mode the cache will read directly from the remote and write
  2756  directly to the remote without caching anything on disk.
  2757  
  2758  This will mean some operations are not possible
  2759  
  2760  -   Files can’t be opened for both read AND write
  2761  -   Files opened for write can’t be seeked
  2762  -   Existing files opened for write must have O_TRUNC set
  2763  -   Files open for read with O_TRUNC will be opened write only
  2764  -   Files open for write only will behave as if O_TRUNC was supplied
  2765  -   Open modes O_APPEND, O_TRUNC are ignored
  2766  -   If an upload fails it can’t be retried
  2767  
--vfs-cache-mode minimal
  2769  
  2770  This is very similar to “off” except that files opened for read AND
write will be buffered to disk. This means that files opened for write
will be a lot more compatible, but uses minimal disk space.
  2773  
  2774  These operations are not possible
  2775  
  2776  -   Files opened for write only can’t be seeked
  2777  -   Existing files opened for write must have O_TRUNC set
  2778  -   Files opened for write only will ignore O_APPEND, O_TRUNC
  2779  -   If an upload fails it can’t be retried
  2780  
--vfs-cache-mode writes
  2782  
  2783  In this mode files opened for read only are still read directly from the
  2784  remote, write only and read/write files are buffered to disk first.
  2785  
  2786  This mode should support all normal file system operations.
  2787  
If an upload fails it will be retried up to --low-level-retries times.
  2789  
--vfs-cache-mode full
  2791  
  2792  In this mode all reads and writes are buffered to and from disk. When a
  2793  file is opened for read it will be downloaded in its entirety first.
  2794  
  2795  This may be appropriate for your needs, or you may prefer to look at the
  2796  cache backend which does a much more sophisticated job of caching,
  2797  including caching directory hierarchies and chunks of files.
  2798  
  2799  In this mode, unlike the others, when a file is written to the disk, it
  2800  will be kept on the disk after it is written to the remote. It will be
  2801  purged on a schedule according to --vfs-cache-max-age.
  2802  
  2803  This mode should support all normal file system operations.
  2804  
If an upload or download fails it will be retried up to
--low-level-retries times.
  2807  
  2808      rclone serve dlna remote:path [flags]
  2809  
  2810  Options
  2811  
  2812            --addr string                            ip:port or :port to bind the DLNA http server to. (default ":7879")
  2813            --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
  2814            --dir-perms FileMode                     Directory permissions (default 0777)
  2815            --file-perms FileMode                    File permissions (default 0666)
  2816            --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  2817        -h, --help                                   help for dlna
  2818            --log-trace                              enable trace logging of SOAP traffic
  2819            --name string                            name of DLNA server
  2820            --no-checksum                            Don't compare checksums on up/download.
  2821            --no-modtime                             Don't read/write the modification time (can speed things up).
  2822            --no-seek                                Don't allow seeking in files.
  2823            --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
  2824            --read-only                              Mount read-only.
  2825            --uid uint32                             Override the uid field set by the filesystem. (default 1000)
  2826            --umask int                              Override the permission bits set by the filesystem. (default 2)
  2827            --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
  2828            --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
  2829            --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
  2830            --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
  2831            --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
  2832            --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
  2833  
  2834  SEE ALSO
  2835  
  2836  -   rclone serve - Serve a remote over a protocol.
  2837  
  2838  Auto generated by spf13/cobra on 15-Jun-2019
  2839  
  2840  
  2841  rclone serve ftp
  2842  
  2843  Serve remote:path over FTP.
  2844  
  2845  Synopsis
  2846  
rclone serve ftp implements a basic FTP server to serve the remote over
the FTP protocol. This can be viewed with an FTP client or you can make a
remote of type ftp to read and write it.
  2850  
  2851  Server options
  2852  
Use --addr to specify which IP address and port the server should listen
on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
default it only listens on localhost. You can use port :0 to let the OS
choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then
using Authentication is advised - see the next section for info.
  2860  
  2861  Authentication
  2862  
  2863  By default this will serve files without needing a login.
  2864  
You can set a single username and password with the --user and --pass
flags.
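
For example, to serve a remote on the default address with a single login
(the credentials shown are illustrative only):

    rclone serve ftp remote:path --user myuser --pass mypassword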
  2867  
  2868  Directory Cache
  2869  
  2870  Using the --dir-cache-time flag, you can set how long a directory should
  2871  be considered up to date and not refreshed from the backend. Changes
  2872  made locally in the mount may appear immediately or invalidate the
  2873  cache. However, changes done on the remote will only be picked up once
  2874  the cache expires.
  2875  
  2876  Alternatively, you can send a SIGHUP signal to rclone for it to flush
  2877  all directory caches, regardless of how old they are. Assuming only one
  2878  rclone instance is running, you can reset the cache like this:
  2879  
  2880      kill -SIGHUP $(pidof rclone)
  2881  
  2882  If you configure rclone with a remote control then you can use rclone rc
  2883  to flush the whole directory cache:
  2884  
  2885      rclone rc vfs/forget
  2886  
  2887  Or individual files or directories:
  2888  
  2889      rclone rc vfs/forget file=path/to/file dir=path/to/dir
  2890  
  2891  File Buffering
  2892  
The --buffer-size flag determines the amount of memory that will be
used to buffer data in advance.
  2895  
  2896  Each open file descriptor will try to keep the specified amount of data
  2897  in memory at all times. The buffered data is bound to one file
  2898  descriptor and won’t be shared between multiple open file descriptors of
  2899  the same file.
  2900  
This flag is an upper limit for the used memory per file descriptor. The
buffer will only use memory for data that is downloaded but not yet
  2903  read. If the buffer is empty, only a small amount of memory will be
  2904  used. The maximum memory used by rclone for buffering can be up to
  2905  --buffer-size * open files.
  2906  
  2907  File Caching
  2908  
  2909  These flags control the VFS file caching options. The VFS layer is used
  2910  by rclone mount to make a cloud storage system work more like a normal
  2911  file system.
  2912  
  2913  You’ll need to enable VFS caching if you want, for example, to read and
  2914  write simultaneously to a file. See below for more details.
  2915  
  2916  Note that the VFS cache works in addition to the cache backend and you
  2917  may find that you need one or the other or both.
  2918  
  2919      --cache-dir string                   Directory rclone will use for caching.
  2920      --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
  2921      --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
  2922      --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
  2923      --vfs-cache-max-size int             Max total size of objects in the cache. (default off)
  2924  
  2925  If run with -vv rclone will print the location of the file cache. The
  2926  files are stored in the user cache file area which is OS dependent but
  2927  can be controlled with --cache-dir or setting the appropriate
  2928  environment variable.
  2929  
  2930  The cache has 4 different modes selected by --vfs-cache-mode. The higher
  2931  the cache mode the more compatible rclone becomes at the cost of using
  2932  disk space.
  2933  
  2934  Note that files are written back to the remote only when they are closed
  2935  so if rclone is quit or dies with open files then these won’t get
  2936  written back to the remote. However they will still be in the on disk
  2937  cache.
  2938  
If using --vfs-cache-max-size note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
  2942  from the cache.
  2943  
--vfs-cache-mode off
  2945  
  2946  In this mode the cache will read directly from the remote and write
  2947  directly to the remote without caching anything on disk.
  2948  
  2949  This will mean some operations are not possible
  2950  
  2951  -   Files can’t be opened for both read AND write
  2952  -   Files opened for write can’t be seeked
  2953  -   Existing files opened for write must have O_TRUNC set
  2954  -   Files open for read with O_TRUNC will be opened write only
  2955  -   Files open for write only will behave as if O_TRUNC was supplied
  2956  -   Open modes O_APPEND, O_TRUNC are ignored
  2957  -   If an upload fails it can’t be retried
  2958  
--vfs-cache-mode minimal
  2960  
  2961  This is very similar to “off” except that files opened for read AND
write will be buffered to disk. This means that files opened for write
will be a lot more compatible, but uses minimal disk space.
  2964  
  2965  These operations are not possible
  2966  
  2967  -   Files opened for write only can’t be seeked
  2968  -   Existing files opened for write must have O_TRUNC set
  2969  -   Files opened for write only will ignore O_APPEND, O_TRUNC
  2970  -   If an upload fails it can’t be retried
  2971  
--vfs-cache-mode writes
  2973  
  2974  In this mode files opened for read only are still read directly from the
  2975  remote, write only and read/write files are buffered to disk first.
  2976  
  2977  This mode should support all normal file system operations.
  2978  
If an upload fails it will be retried up to --low-level-retries times.
  2980  
--vfs-cache-mode full
  2982  
  2983  In this mode all reads and writes are buffered to and from disk. When a
  2984  file is opened for read it will be downloaded in its entirety first.
  2985  
  2986  This may be appropriate for your needs, or you may prefer to look at the
  2987  cache backend which does a much more sophisticated job of caching,
  2988  including caching directory hierarchies and chunks of files.
  2989  
  2990  In this mode, unlike the others, when a file is written to the disk, it
  2991  will be kept on the disk after it is written to the remote. It will be
  2992  purged on a schedule according to --vfs-cache-max-age.
  2993  
  2994  This mode should support all normal file system operations.
  2995  
If an upload or download fails it will be retried up to
--low-level-retries times.
  2998  
  2999      rclone serve ftp remote:path [flags]
  3000  
  3001  Options
  3002  
  3003            --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2121")
  3004            --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
  3005            --dir-perms FileMode                     Directory permissions (default 0777)
  3006            --file-perms FileMode                    File permissions (default 0666)
  3007            --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  3008        -h, --help                                   help for ftp
  3009            --no-checksum                            Don't compare checksums on up/download.
  3010            --no-modtime                             Don't read/write the modification time (can speed things up).
  3011            --no-seek                                Don't allow seeking in files.
  3012            --pass string                            Password for authentication. (empty value allow every password)
  3013            --passive-port string                    Passive port range to use. (default "30000-32000")
  3014            --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
  3015            --public-ip string                       Public IP address to advertise for passive connections.
  3016            --read-only                              Mount read-only.
  3017            --uid uint32                             Override the uid field set by the filesystem. (default 1000)
  3018            --umask int                              Override the permission bits set by the filesystem. (default 2)
  3019            --user string                            User name for authentication. (default "anonymous")
  3020            --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
  3021            --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
  3022            --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
  3023            --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
  3024            --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
  3025            --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
  3026  
  3027  SEE ALSO
  3028  
  3029  -   rclone serve - Serve a remote over a protocol.
  3030  
  3031  Auto generated by spf13/cobra on 15-Jun-2019
  3032  
  3033  
  3034  rclone serve http
  3035  
  3036  Serve the remote over HTTP.
  3037  
  3038  Synopsis
  3039  
  3040  rclone serve http implements a basic web server to serve the remote over
  3041  HTTP. This can be viewed in a web browser or you can make a remote of
  3042  type http read from it.
  3043  
  3044  You can use the filter flags (eg –include, –exclude) to control what is
  3045  served.
  3046  
  3047  The server will log errors. Use -v to see access logs.
  3048  
  3049  –bwlimit will be respected for file transfers. Use –stats to control the
  3050  stats printing.
  3051  
  3052  Server options
  3053  
  3054  Use –addr to specify which IP address and port the server should listen
  3055  on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By
  3056  default it only listens on localhost. You can use port :0 to let the OS
  3057  choose an available port.
  3058  
  3059  If you set –addr to listen on a public or LAN accessible IP address then
  3060  using Authentication is advised - see the next section for info.
  3061  
  3062  –server-read-timeout and –server-write-timeout can be used to control
  3063  the timeouts on the server. Note that this is the total time for a
  3064  transfer.
  3065  
  3066  –max-header-bytes controls the maximum number of bytes the server will
  3067  accept in the HTTP header.
  3068  
  3069  Authentication
  3070  
  3071  By default this will serve files without needing a login.
  3072  
  3073  You can either use an htpasswd file which can take lots of users, or set
  3074  a single username and password with the –user and –pass flags.
  3075  
  3076  Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
  3077  standard apache format and supports MD5, SHA1 and BCrypt for basic
  3078  authentication. Bcrypt is recommended.
  3079  
  3080  To create an htpasswd file:
  3081  
  3082      touch htpasswd
  3083      htpasswd -B htpasswd user
  3084      htpasswd -B htpasswd anotherUser
  3085  
  3086  The password file can be updated while rclone is running.
  3087  
  3088  Use –realm to set the authentication realm.
  3089  
  3090  SSL/TLS
  3091  
  3092  By default this will serve over http. If you want you can serve over
  3093  https. You will need to supply the –cert and –key flags. If you wish to
  3094  do client side certificate validation then you will need to supply
  3095  –client-ca also.
  3096  
–cert should be either a PEM encoded certificate or a concatenation of
  3098  that with the CA certificate. –key should be the PEM encoded private key
  3099  and –client-ca should be the PEM encoded client certificate authority
  3100  certificate.
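
For example, a sketch of serving over HTTPS with htpasswd
authentication (the certificate, key and htpasswd paths are
placeholders):

    rclone serve http --addr :8080 --cert server.crt --key server.key --htpasswd /path/to/htpasswd remote:path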
  3101  
  3102  Directory Cache
  3103  
  3104  Using the --dir-cache-time flag, you can set how long a directory should
  3105  be considered up to date and not refreshed from the backend. Changes
  3106  made locally in the mount may appear immediately or invalidate the
  3107  cache. However, changes done on the remote will only be picked up once
  3108  the cache expires.
  3109  
  3110  Alternatively, you can send a SIGHUP signal to rclone for it to flush
  3111  all directory caches, regardless of how old they are. Assuming only one
  3112  rclone instance is running, you can reset the cache like this:
  3113  
  3114      kill -SIGHUP $(pidof rclone)
  3115  
  3116  If you configure rclone with a remote control then you can use rclone rc
  3117  to flush the whole directory cache:
  3118  
  3119      rclone rc vfs/forget
  3120  
  3121  Or individual files or directories:
  3122  
  3123      rclone rc vfs/forget file=path/to/file dir=path/to/dir
  3124  
  3125  File Buffering
  3126  
The --buffer-size flag determines the amount of memory that will be
  3128  used to buffer data in advance.
  3129  
  3130  Each open file descriptor will try to keep the specified amount of data
  3131  in memory at all times. The buffered data is bound to one file
  3132  descriptor and won’t be shared between multiple open file descriptors of
  3133  the same file.
  3134  
This flag is an upper limit for the used memory per file descriptor. The
buffer will only use memory for data that is downloaded but not yet
  3137  read. If the buffer is empty, only a small amount of memory will be
  3138  used. The maximum memory used by rclone for buffering can be up to
  3139  --buffer-size * open files.
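
For example, a rough sketch limiting the read-ahead buffer to 16M per
open file (the value is illustrative, not a recommendation):

    rclone serve http --buffer-size 16M remote:path

With, say, 10 files open at once this would use at most about 160M of
memory for buffering.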
  3140  
  3141  File Caching
  3142  
  3143  These flags control the VFS file caching options. The VFS layer is used
  3144  by rclone mount to make a cloud storage system work more like a normal
  3145  file system.
  3146  
  3147  You’ll need to enable VFS caching if you want, for example, to read and
  3148  write simultaneously to a file. See below for more details.
  3149  
  3150  Note that the VFS cache works in addition to the cache backend and you
  3151  may find that you need one or the other or both.
  3152  
  3153      --cache-dir string                   Directory rclone will use for caching.
  3154      --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
  3155      --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
  3156      --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
  3157      --vfs-cache-max-size int             Max total size of objects in the cache. (default off)
  3158  
  3159  If run with -vv rclone will print the location of the file cache. The
  3160  files are stored in the user cache file area which is OS dependent but
  3161  can be controlled with --cache-dir or setting the appropriate
  3162  environment variable.
  3163  
  3164  The cache has 4 different modes selected by --vfs-cache-mode. The higher
  3165  the cache mode the more compatible rclone becomes at the cost of using
  3166  disk space.
  3167  
  3168  Note that files are written back to the remote only when they are closed
  3169  so if rclone is quit or dies with open files then these won’t get
  3170  written back to the remote. However they will still be in the on disk
  3171  cache.
  3172  
  3173  If using –vfs-cache-max-size note that the cache may exceed this size
  3174  for two reasons. Firstly because it is only checked every
  3175  –vfs-cache-poll-interval. Secondly because open files cannot be evicted
  3176  from the cache.
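
For example, a sketch combining the cache flags (the sizes and
durations here are placeholders):

    rclone serve http --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-cache-max-age 1h remote:path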
  3177  
  3178  –vfs-cache-mode off
  3179  
  3180  In this mode the cache will read directly from the remote and write
  3181  directly to the remote without caching anything on disk.
  3182  
  3183  This will mean some operations are not possible
  3184  
  3185  -   Files can’t be opened for both read AND write
  3186  -   Files opened for write can’t be seeked
  3187  -   Existing files opened for write must have O_TRUNC set
  3188  -   Files open for read with O_TRUNC will be opened write only
  3189  -   Files open for write only will behave as if O_TRUNC was supplied
  3190  -   Open modes O_APPEND, O_TRUNC are ignored
  3191  -   If an upload fails it can’t be retried
  3192  
  3193  –vfs-cache-mode minimal
  3194  
This is very similar to “off” except that files opened for read AND
write will be buffered to disk. This means that files opened for write
will be a lot more compatible, but uses minimal disk space.
  3198  
  3199  These operations are not possible
  3200  
  3201  -   Files opened for write only can’t be seeked
  3202  -   Existing files opened for write must have O_TRUNC set
  3203  -   Files opened for write only will ignore O_APPEND, O_TRUNC
  3204  -   If an upload fails it can’t be retried
  3205  
  3206  –vfs-cache-mode writes
  3207  
  3208  In this mode files opened for read only are still read directly from the
  3209  remote, write only and read/write files are buffered to disk first.
  3210  
  3211  This mode should support all normal file system operations.
  3212  
  3213  If an upload fails it will be retried up to –low-level-retries times.
  3214  
  3215  –vfs-cache-mode full
  3216  
  3217  In this mode all reads and writes are buffered to and from disk. When a
  3218  file is opened for read it will be downloaded in its entirety first.
  3219  
  3220  This may be appropriate for your needs, or you may prefer to look at the
  3221  cache backend which does a much more sophisticated job of caching,
  3222  including caching directory hierarchies and chunks of files.
  3223  
  3224  In this mode, unlike the others, when a file is written to the disk, it
  3225  will be kept on the disk after it is written to the remote. It will be
  3226  purged on a schedule according to --vfs-cache-max-age.
  3227  
  3228  This mode should support all normal file system operations.
  3229  
  3230  If an upload or download fails it will be retried up to
  3231  –low-level-retries times.
  3232  
  3233      rclone serve http remote:path [flags]
  3234  
  3235  Options
  3236  
  3237            --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
  3238            --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
  3239            --client-ca string                       Client certificate authority to verify clients with
  3240            --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
  3241            --dir-perms FileMode                     Directory permissions (default 0777)
  3242            --file-perms FileMode                    File permissions (default 0666)
  3243            --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  3244        -h, --help                                   help for http
  3245            --htpasswd string                        htpasswd file - if not provided no authentication is done
  3246            --key string                             SSL PEM Private key
  3247            --max-header-bytes int                   Maximum size of request header (default 4096)
  3248            --no-checksum                            Don't compare checksums on up/download.
  3249            --no-modtime                             Don't read/write the modification time (can speed things up).
  3250            --no-seek                                Don't allow seeking in files.
  3251            --pass string                            Password for authentication.
  3252            --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
  3253            --read-only                              Mount read-only.
  3254            --realm string                           realm for authentication (default "rclone")
  3255            --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
  3256            --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
  3257            --uid uint32                             Override the uid field set by the filesystem. (default 1000)
  3258            --umask int                              Override the permission bits set by the filesystem. (default 2)
  3259            --user string                            User name for authentication.
  3260            --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
  3261            --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
  3262            --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
  3263            --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
  3264            --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
  3265            --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
  3266  
  3267  SEE ALSO
  3268  
  3269  -   rclone serve - Serve a remote over a protocol.
  3270  
  3271  Auto generated by spf13/cobra on 15-Jun-2019
  3272  
  3273  
  3274  rclone serve restic
  3275  
  3276  Serve the remote for restic’s REST API.
  3277  
  3278  Synopsis
  3279  
  3280  rclone serve restic implements restic’s REST backend API over HTTP. This
  3281  allows restic to use rclone as a data storage mechanism for cloud
  3282  providers that restic does not support directly.
  3283  
  3284  Restic is a command line program for doing backups.
  3285  
  3286  The server will log errors. Use -v to see access logs.
  3287  
  3288  –bwlimit will be respected for file transfers. Use –stats to control the
  3289  stats printing.
  3290  
  3291  Setting up rclone for use by restic
  3292  
  3293  First set up a remote for your chosen cloud provider.
  3294  
  3295  Once you have set up the remote, check it is working with, for example
  3296  “rclone lsd remote:”. You may have called the remote something other
  3297  than “remote:” - just substitute whatever you called it in the following
  3298  instructions.
  3299  
  3300  Now start the rclone restic server
  3301  
  3302      rclone serve restic -v remote:backup
  3303  
  3304  Where you can replace “backup” in the above by whatever path in the
  3305  remote you wish to use.
  3306  
By default this will serve on “localhost:8080”; you can change this
with use of the “–addr” flag.
  3309  
  3310  You might wish to start this server on boot.
  3311  
  3312  Setting up restic to use rclone
  3313  
  3314  Now you can follow the restic instructions on setting up restic.
  3315  
  3316  Note that you will need restic 0.8.2 or later to interoperate with
  3317  rclone.
  3318  
  3319  For the example above you will want to use “http://localhost:8080/” as
  3320  the URL for the REST server.
  3321  
  3322  For example:
  3323  
  3324      $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
  3325      $ export RESTIC_PASSWORD=yourpassword
  3326      $ restic init
  3327      created restic backend 8b1a4b56ae at rest:http://localhost:8080/
  3328  
  3329      Please note that knowledge of your password is required to access
  3330      the repository. Losing your password means that your data is
  3331      irrecoverably lost.
  3332      $ restic backup /path/to/files/to/backup
  3333      scan [/path/to/files/to/backup]
  3334      scanned 189 directories, 312 files in 0:00
  3335      [0:00] 100.00%  38.128 MiB / 38.128 MiB  501 / 501 items  0 errors  ETA 0:00
  3336      duration: 0:00
  3337      snapshot 45c8fdd8 saved
  3338  
  3339  Multiple repositories
  3340  
  3341  Note that you can use the endpoint to host multiple repositories. Do
  3342  this by adding a directory name or path after the URL. Note that these
  3343  MUST end with /. Eg
  3344  
  3345      $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
  3346      # backup user1 stuff
  3347      $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
  3348      # backup user2 stuff
  3349  
  3350  Private repositories
  3351  
  3352  The “–private-repos” flag can be used to limit users to repositories
starting with a path of “/<username>/”.
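
For example, a sketch combining private repositories with an htpasswd
file (the htpasswd path is a placeholder); each user in the file is
then limited to their own repository path:

    rclone serve restic --private-repos --htpasswd /path/to/htpasswd remote:backup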
  3354  
  3355  Server options
  3356  
  3357  Use –addr to specify which IP address and port the server should listen
  3358  on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By
  3359  default it only listens on localhost. You can use port :0 to let the OS
  3360  choose an available port.
  3361  
  3362  If you set –addr to listen on a public or LAN accessible IP address then
  3363  using Authentication is advised - see the next section for info.
  3364  
  3365  –server-read-timeout and –server-write-timeout can be used to control
  3366  the timeouts on the server. Note that this is the total time for a
  3367  transfer.
  3368  
  3369  –max-header-bytes controls the maximum number of bytes the server will
  3370  accept in the HTTP header.
  3371  
  3372  Authentication
  3373  
  3374  By default this will serve files without needing a login.
  3375  
  3376  You can either use an htpasswd file which can take lots of users, or set
  3377  a single username and password with the –user and –pass flags.
  3378  
  3379  Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
  3380  standard apache format and supports MD5, SHA1 and BCrypt for basic
  3381  authentication. Bcrypt is recommended.
  3382  
  3383  To create an htpasswd file:
  3384  
  3385      touch htpasswd
  3386      htpasswd -B htpasswd user
  3387      htpasswd -B htpasswd anotherUser
  3388  
  3389  The password file can be updated while rclone is running.
  3390  
  3391  Use –realm to set the authentication realm.
  3392  
  3393  SSL/TLS
  3394  
  3395  By default this will serve over http. If you want you can serve over
  3396  https. You will need to supply the –cert and –key flags. If you wish to
  3397  do client side certificate validation then you will need to supply
  3398  –client-ca also.
  3399  
–cert should be either a PEM encoded certificate or a concatenation of
  3401  that with the CA certificate. –key should be the PEM encoded private key
  3402  and –client-ca should be the PEM encoded client certificate authority
  3403  certificate.
  3404  
  3405      rclone serve restic remote:path [flags]
  3406  
  3407  Options
  3408  
  3409            --addr string                     IPaddress:Port or :Port to bind server to. (default "localhost:8080")
  3410            --append-only                     disallow deletion of repository data
  3411            --cert string                     SSL PEM key (concatenation of certificate and CA certificate)
  3412            --client-ca string                Client certificate authority to verify clients with
  3413        -h, --help                            help for restic
  3414            --htpasswd string                 htpasswd file - if not provided no authentication is done
  3415            --key string                      SSL PEM Private key
  3416            --max-header-bytes int            Maximum size of request header (default 4096)
  3417            --pass string                     Password for authentication.
  3418            --private-repos                   users can only access their private repo
  3419            --realm string                    realm for authentication (default "rclone")
  3420            --server-read-timeout duration    Timeout for server reading data (default 1h0m0s)
  3421            --server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
  3422            --stdio                           run an HTTP2 server on stdin/stdout
  3423            --user string                     User name for authentication.
  3424  
  3425  SEE ALSO
  3426  
  3427  -   rclone serve - Serve a remote over a protocol.
  3428  
  3429  Auto generated by spf13/cobra on 15-Jun-2019
  3430  
  3431  
  3432  rclone serve sftp
  3433  
  3434  Serve the remote over SFTP.
  3435  
  3436  Synopsis
  3437  
  3438  rclone serve sftp implements an SFTP server to serve the remote over
  3439  SFTP. This can be used with an SFTP client or you can make a remote of
  3440  type sftp to use with it.
  3441  
  3442  You can use the filter flags (eg –include, –exclude) to control what is
  3443  served.
  3444  
  3445  The server will log errors. Use -v to see access logs.
  3446  
  3447  –bwlimit will be respected for file transfers. Use –stats to control the
  3448  stats printing.
  3449  
  3450  You must provide some means of authentication, either with –user/–pass,
  3451  an authorized keys file (specify location with –authorized-keys - the
  3452  default is the same as ssh) or set the –no-auth flag for no
  3453  authentication when logging in.
  3454  
  3455  Note that this also implements a small number of shell commands so that
  3456  it can provide md5sum/sha1sum/df information for the rclone sftp
backend. This means that it can support SHA1SUMs, MD5SUMs and the about
  3458  command when paired with the rclone sftp backend.
  3459  
  3460  If you don’t supply a –key then rclone will generate one and cache it
  3461  for later use.
  3462  
  3463  By default the server binds to localhost:2022 - if you want it to be
  3464  reachable externally then supply “–addr :2022” for example.
  3465  
  3466  Note that the default of “–vfs-cache-mode off” is fine for the rclone
  3467  sftp backend, but it may not be with other SFTP clients.
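
For example, a sketch serving a remote over SFTP with a username and
password, then connecting with a standard OpenSSH sftp client (names
and credentials are placeholders):

    rclone serve sftp --addr :2022 --user sftpuser --pass secret remote:path
    sftp -P 2022 sftpuser@localhost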
  3468  
  3469  Directory Cache
  3470  
  3471  Using the --dir-cache-time flag, you can set how long a directory should
  3472  be considered up to date and not refreshed from the backend. Changes
  3473  made locally in the mount may appear immediately or invalidate the
  3474  cache. However, changes done on the remote will only be picked up once
  3475  the cache expires.
  3476  
  3477  Alternatively, you can send a SIGHUP signal to rclone for it to flush
  3478  all directory caches, regardless of how old they are. Assuming only one
  3479  rclone instance is running, you can reset the cache like this:
  3480  
  3481      kill -SIGHUP $(pidof rclone)
  3482  
  3483  If you configure rclone with a remote control then you can use rclone rc
  3484  to flush the whole directory cache:
  3485  
  3486      rclone rc vfs/forget
  3487  
  3488  Or individual files or directories:
  3489  
  3490      rclone rc vfs/forget file=path/to/file dir=path/to/dir
  3491  
  3492  File Buffering
  3493  
The --buffer-size flag determines the amount of memory that will be
  3495  used to buffer data in advance.
  3496  
  3497  Each open file descriptor will try to keep the specified amount of data
  3498  in memory at all times. The buffered data is bound to one file
  3499  descriptor and won’t be shared between multiple open file descriptors of
  3500  the same file.
  3501  
This flag is an upper limit for the used memory per file descriptor. The
buffer will only use memory for data that is downloaded but not yet
  3504  read. If the buffer is empty, only a small amount of memory will be
  3505  used. The maximum memory used by rclone for buffering can be up to
  3506  --buffer-size * open files.
  3507  
  3508  File Caching
  3509  
  3510  These flags control the VFS file caching options. The VFS layer is used
  3511  by rclone mount to make a cloud storage system work more like a normal
  3512  file system.
  3513  
  3514  You’ll need to enable VFS caching if you want, for example, to read and
  3515  write simultaneously to a file. See below for more details.
  3516  
  3517  Note that the VFS cache works in addition to the cache backend and you
  3518  may find that you need one or the other or both.
  3519  
  3520      --cache-dir string                   Directory rclone will use for caching.
  3521      --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
  3522      --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
  3523      --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
  3524      --vfs-cache-max-size int             Max total size of objects in the cache. (default off)
  3525  
  3526  If run with -vv rclone will print the location of the file cache. The
  3527  files are stored in the user cache file area which is OS dependent but
  3528  can be controlled with --cache-dir or setting the appropriate
  3529  environment variable.
  3530  
  3531  The cache has 4 different modes selected by --vfs-cache-mode. The higher
  3532  the cache mode the more compatible rclone becomes at the cost of using
  3533  disk space.
  3534  
  3535  Note that files are written back to the remote only when they are closed
  3536  so if rclone is quit or dies with open files then these won’t get
  3537  written back to the remote. However they will still be in the on disk
  3538  cache.
  3539  
  3540  If using –vfs-cache-max-size note that the cache may exceed this size
  3541  for two reasons. Firstly because it is only checked every
  3542  –vfs-cache-poll-interval. Secondly because open files cannot be evicted
  3543  from the cache.
  3544  
  3545  –vfs-cache-mode off
  3546  
  3547  In this mode the cache will read directly from the remote and write
  3548  directly to the remote without caching anything on disk.
  3549  
  3550  This will mean some operations are not possible
  3551  
  3552  -   Files can’t be opened for both read AND write
  3553  -   Files opened for write can’t be seeked
  3554  -   Existing files opened for write must have O_TRUNC set
  3555  -   Files open for read with O_TRUNC will be opened write only
  3556  -   Files open for write only will behave as if O_TRUNC was supplied
  3557  -   Open modes O_APPEND, O_TRUNC are ignored
  3558  -   If an upload fails it can’t be retried
  3559  
  3560  –vfs-cache-mode minimal
  3561  
This is very similar to “off” except that files opened for read AND
write will be buffered to disk. This means that files opened for write
will be a lot more compatible, but uses minimal disk space.
  3565  
  3566  These operations are not possible
  3567  
  3568  -   Files opened for write only can’t be seeked
  3569  -   Existing files opened for write must have O_TRUNC set
  3570  -   Files opened for write only will ignore O_APPEND, O_TRUNC
  3571  -   If an upload fails it can’t be retried
  3572  
  3573  –vfs-cache-mode writes
  3574  
  3575  In this mode files opened for read only are still read directly from the
  3576  remote, write only and read/write files are buffered to disk first.
  3577  
  3578  This mode should support all normal file system operations.
  3579  
  3580  If an upload fails it will be retried up to –low-level-retries times.
  3581  
  3582  –vfs-cache-mode full
  3583  
  3584  In this mode all reads and writes are buffered to and from disk. When a
  3585  file is opened for read it will be downloaded in its entirety first.
  3586  
  3587  This may be appropriate for your needs, or you may prefer to look at the
  3588  cache backend which does a much more sophisticated job of caching,
  3589  including caching directory hierarchies and chunks of files.
  3590  
  3591  In this mode, unlike the others, when a file is written to the disk, it
  3592  will be kept on the disk after it is written to the remote. It will be
  3593  purged on a schedule according to --vfs-cache-max-age.
  3594  
  3595  This mode should support all normal file system operations.
  3596  
  3597  If an upload or download fails it will be retried up to
  3598  –low-level-retries times.
  3599  
  3600      rclone serve sftp remote:path [flags]
  3601  
  3602  Options
  3603  
  3604            --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2022")
  3605            --authorized-keys string                 Authorized keys file (default "~/.ssh/authorized_keys")
  3606            --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
  3607            --dir-perms FileMode                     Directory permissions (default 0777)
  3608            --file-perms FileMode                    File permissions (default 0666)
  3609            --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  3610        -h, --help                                   help for sftp
  3611            --key string                             SSH private key file (leave blank to auto generate)
  3612            --no-auth                                Allow connections with no authentication if set.
  3613            --no-checksum                            Don't compare checksums on up/download.
  3614            --no-modtime                             Don't read/write the modification time (can speed things up).
  3615            --no-seek                                Don't allow seeking in files.
  3616            --pass string                            Password for authentication.
  3617            --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
  3618            --read-only                              Mount read-only.
  3619            --uid uint32                             Override the uid field set by the filesystem. (default 1000)
  3620            --umask int                              Override the permission bits set by the filesystem. (default 2)
  3621            --user string                            User name for authentication.
  3622            --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
  3623            --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
  3624            --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
  3625            --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
  3626            --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
  3627            --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
  3628  
  3629  SEE ALSO
  3630  
  3631  -   rclone serve - Serve a remote over a protocol.
  3632  
  3633  Auto generated by spf13/cobra on 15-Jun-2019
  3634  
  3635  
  3636  rclone serve webdav
  3637  
  3638  Serve remote:path over webdav.
  3639  
  3640  Synopsis
  3641  
  3642  rclone serve webdav implements a basic webdav server to serve the remote
  3643  over HTTP via the webdav protocol. This can be viewed with a webdav
  3644  client, through a web browser, or you can make a remote of type webdav
  3645  to read and write it.
  3646  
  3647  Webdav options
  3648  
  3649  –etag-hash
  3650  
  3651  This controls the ETag header. Without this flag the ETag will be based
  3652  on the ModTime and Size of the object.
  3653  
  3654  If this flag is set to “auto” then rclone will choose the first
  3655  supported hash on the backend or you can use a named hash such as “MD5”
  3656  or “SHA-1”.
  3657  
  3658  Use “rclone hashsum” to see the full list.
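
For example, a sketch serving with ETags derived from the first
supported hash on the backend:

    rclone serve webdav --etag-hash auto remote:path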
  3659  
  3660  Server options
  3661  
  3662  Use –addr to specify which IP address and port the server should listen
  3663  on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By
  3664  default it only listens on localhost. You can use port :0 to let the OS
  3665  choose an available port.
  3666  
  3667  If you set –addr to listen on a public or LAN accessible IP address then
  3668  using Authentication is advised - see the next section for info.
  3669  
  3670  –server-read-timeout and –server-write-timeout can be used to control
  3671  the timeouts on the server. Note that this is the total time for a
  3672  transfer.
  3673  
  3674  –max-header-bytes controls the maximum number of bytes the server will
  3675  accept in the HTTP header.
  3676  
  3677  Authentication
  3678  
  3679  By default this will serve files without needing a login.
  3680  
  3681  You can either use an htpasswd file which can take lots of users, or set
  3682  a single username and password with the –user and –pass flags.
  3683  
  3684  Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
  3685  standard apache format and supports MD5, SHA1 and BCrypt for basic
  3686  authentication. Bcrypt is recommended.
  3687  
  3688  To create an htpasswd file:
  3689  
  3690      touch htpasswd
  3691      htpasswd -B htpasswd user
  3692      htpasswd -B htpasswd anotherUser
  3693  
  3694  The password file can be updated while rclone is running.
  3695  
  3696  Use –realm to set the authentication realm.
  3697  
  3698  SSL/TLS
  3699  
  3700  By default this will serve over http. If you want you can serve over
  3701  https. You will need to supply the –cert and –key flags. If you wish to
  3702  do client side certificate validation then you will need to supply
  3703  –client-ca also.
  3704  
–cert should be either a PEM encoded certificate or a concatenation of
  3706  that with the CA certificate. –key should be the PEM encoded private key
  3707  and –client-ca should be the PEM encoded client certificate authority
  3708  certificate.
  3709  
  3710  Directory Cache
  3711  
  3712  Using the --dir-cache-time flag, you can set how long a directory should
  3713  be considered up to date and not refreshed from the backend. Changes
  3714  made locally in the mount may appear immediately or invalidate the
  3715  cache. However, changes done on the remote will only be picked up once
  3716  the cache expires.
  3717  
  3718  Alternatively, you can send a SIGHUP signal to rclone for it to flush
  3719  all directory caches, regardless of how old they are. Assuming only one
  3720  rclone instance is running, you can reset the cache like this:
  3721  
  3722      kill -SIGHUP $(pidof rclone)
  3723  
  3724  If you configure rclone with a remote control then you can use rclone rc
  3725  to flush the whole directory cache:
  3726  
  3727      rclone rc vfs/forget
  3728  
  3729  Or individual files or directories:
  3730  
  3731      rclone rc vfs/forget file=path/to/file dir=path/to/dir
  3732  
  3733  File Buffering
  3734  
The --buffer-size flag determines the amount of memory that will be
  3736  used to buffer data in advance.
  3737  
  3738  Each open file descriptor will try to keep the specified amount of data
  3739  in memory at all times. The buffered data is bound to one file
  3740  descriptor and won’t be shared between multiple open file descriptors of
  3741  the same file.
  3742  
This flag is an upper limit for the used memory per file descriptor. The
buffer will only use memory for data that is downloaded but not yet
  3745  read. If the buffer is empty, only a small amount of memory will be
  3746  used. The maximum memory used by rclone for buffering can be up to
  3747  --buffer-size * open files.
  3748  
  3749  File Caching
  3750  
  3751  These flags control the VFS file caching options. The VFS layer is used
  3752  by rclone mount to make a cloud storage system work more like a normal
  3753  file system.
  3754  
  3755  You’ll need to enable VFS caching if you want, for example, to read and
  3756  write simultaneously to a file. See below for more details.
  3757  
  3758  Note that the VFS cache works in addition to the cache backend and you
  3759  may find that you need one or the other or both.
  3760  
  3761      --cache-dir string                   Directory rclone will use for caching.
  3762      --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
  3763      --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
  3764      --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
  3765      --vfs-cache-max-size int             Max total size of objects in the cache. (default off)
  3766  
  3767  If run with -vv rclone will print the location of the file cache. The
  3768  files are stored in the user cache file area which is OS dependent but
  3769  can be controlled with --cache-dir or setting the appropriate
  3770  environment variable.
  3771  
  3772  The cache has 4 different modes selected by --vfs-cache-mode. The higher
  3773  the cache mode the more compatible rclone becomes at the cost of using
  3774  disk space.
  3775  
  3776  Note that files are written back to the remote only when they are closed
  3777  so if rclone is quit or dies with open files then these won’t get
  3778  written back to the remote. However they will still be in the on disk
  3779  cache.
  3780  
  3781  If using –vfs-cache-max-size note that the cache may exceed this size
  3782  for two reasons. Firstly because it is only checked every
  3783  –vfs-cache-poll-interval. Secondly because open files cannot be evicted
  3784  from the cache.
  3785  
  3786  –vfs-cache-mode off
  3787  
  3788  In this mode the cache will read directly from the remote and write
  3789  directly to the remote without caching anything on disk.
  3790  
  3791  This will mean some operations are not possible
  3792  
  3793  -   Files can’t be opened for both read AND write
  3794  -   Files opened for write can’t be seeked
  3795  -   Existing files opened for write must have O_TRUNC set
  3796  -   Files open for read with O_TRUNC will be opened write only
  3797  -   Files open for write only will behave as if O_TRUNC was supplied
  3798  -   Open modes O_APPEND, O_TRUNC are ignored
  3799  -   If an upload fails it can’t be retried
  3800  
  3801  –vfs-cache-mode minimal
  3802  
This is very similar to “off” except that files opened for read AND
write will be buffered to disk. This means that files opened for write
will be a lot more compatible, but uses minimal disk space.
  3806  
  3807  These operations are not possible
  3808  
  3809  -   Files opened for write only can’t be seeked
  3810  -   Existing files opened for write must have O_TRUNC set
  3811  -   Files opened for write only will ignore O_APPEND, O_TRUNC
  3812  -   If an upload fails it can’t be retried
  3813  
  3814  –vfs-cache-mode writes
  3815  
  3816  In this mode files opened for read only are still read directly from the
  3817  remote, write only and read/write files are buffered to disk first.
  3818  
  3819  This mode should support all normal file system operations.
  3820  
  3821  If an upload fails it will be retried up to –low-level-retries times.
  3822  
  3823  –vfs-cache-mode full
  3824  
  3825  In this mode all reads and writes are buffered to and from disk. When a
  3826  file is opened for read it will be downloaded in its entirety first.
  3827  
  3828  This may be appropriate for your needs, or you may prefer to look at the
  3829  cache backend which does a much more sophisticated job of caching,
  3830  including caching directory hierarchies and chunks of files.
  3831  
  3832  In this mode, unlike the others, when a file is written to the disk, it
  3833  will be kept on the disk after it is written to the remote. It will be
  3834  purged on a schedule according to --vfs-cache-max-age.
  3835  
  3836  This mode should support all normal file system operations.
  3837  
  3838  If an upload or download fails it will be retried up to
  3839  –low-level-retries times.
  3840  
  3841      rclone serve webdav remote:path [flags]
  3842  
  3843  Options
  3844  
  3845            --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
  3846            --cert string                            SSL PEM key (concatenation of certificate and CA certificate)
  3847            --client-ca string                       Client certificate authority to verify clients with
  3848            --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
  3849            --dir-perms FileMode                     Directory permissions (default 0777)
  3850            --disable-dir-list                       Disable HTML directory list on GET request for a directory
  3851            --etag-hash string                       Which hash to use for the ETag, or auto or blank for off
  3852            --file-perms FileMode                    File permissions (default 0666)
  3853            --gid uint32                             Override the gid field set by the filesystem. (default 1000)
  3854        -h, --help                                   help for webdav
  3855            --htpasswd string                        htpasswd file - if not provided no authentication is done
  3856            --key string                             SSL PEM Private key
  3857            --max-header-bytes int                   Maximum size of request header (default 4096)
  3858            --no-checksum                            Don't compare checksums on up/download.
  3859            --no-modtime                             Don't read/write the modification time (can speed things up).
  3860            --no-seek                                Don't allow seeking in files.
  3861            --pass string                            Password for authentication.
  3862            --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
  3863            --read-only                              Mount read-only.
  3864            --realm string                           realm for authentication (default "rclone")
  3865            --server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
  3866            --server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
  3867            --uid uint32                             Override the uid field set by the filesystem. (default 1000)
  3868            --umask int                              Override the permission bits set by the filesystem. (default 2)
  3869            --user string                            User name for authentication.
  3870            --vfs-cache-max-age duration             Max age of objects in the cache. (default 1h0m0s)
  3871            --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache. (default off)
  3872            --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
  3873            --vfs-cache-poll-interval duration       Interval to poll the cache for stale objects. (default 1m0s)
  3874            --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
  3875            --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
  3876  
  3877  SEE ALSO
  3878  
  3879  -   rclone serve - Serve a remote over a protocol.
  3880  
  3881  Auto generated by spf13/cobra on 15-Jun-2019
  3882  
  3883  
  3884  rclone settier
  3885  
  3886  Changes storage class/tier of objects in remote.
  3887  
  3888  Synopsis
  3889  
rclone settier changes the storage tier or class of objects at the
remote, if supported. A few cloud storage services provide different
storage classes for objects, for example AWS S3 and Glacier; Azure Blob
Storage - Hot, Cool and Archive; Google Cloud Storage - Regional
Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable to access
immediately. For example, tiering to archive in Azure Blob Storage puts
objects into a frozen state; the user can restore them by setting the
tier to Hot/Cool. Similarly, moving an S3 object to Glacier makes it
inaccessible.
  3899  
You can use it to tier a single object
  3901  
  3902      rclone settier Cool remote:path/file
  3903  
  3904  Or use rclone filters to set tier on only specific files
  3905  
  3906      rclone --include "*.txt" settier Hot remote:path/dir
  3907  
Or just provide a remote directory and all files in that directory will
be tiered
  3910  
  3911      rclone settier tier remote:path/dir
  3912  
  3913      rclone settier tier remote:path [flags]
  3914  
  3915  Options
  3916  
  3917        -h, --help   help for settier
  3918  
  3919  SEE ALSO
  3920  
  3921  -   rclone - Show help for rclone commands, flags and backends.
  3922  
  3923  Auto generated by spf13/cobra on 15-Jun-2019
  3924  
  3925  
  3926  rclone touch
  3927  
  3928  Create new file or change file modification time.
  3929  
  3930  Synopsis
  3931  
  3932  Create new file or change file modification time.
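
For example, a sketch setting an explicit modification time on a file
(the path is a placeholder):

    rclone touch -t 2006-01-02T15:04:05 remote:path/file.txt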
  3933  
  3934      rclone touch remote:path [flags]
  3935  
  3936  Options
  3937  
  3938        -h, --help               help for touch
  3939        -C, --no-create          Do not create the file if it does not exist.
  3940        -t, --timestamp string   Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
  3941  
  3942  SEE ALSO
  3943  
  3944  -   rclone - Show help for rclone commands, flags and backends.
  3945  
  3946  Auto generated by spf13/cobra on 15-Jun-2019
  3947  
  3948  
  3949  rclone tree
  3950  
  3951  List the contents of the remote in a tree like fashion.
  3952  
  3953  Synopsis
  3954  
  3955  rclone tree lists the contents of a remote in a similar way to the unix
  3956  tree command.
  3957  
  3958  For example
  3959  
  3960      $ rclone tree remote:path
  3961      /
  3962      ├── file1
  3963      ├── file2
  3964      ├── file3
  3965      └── subdir
  3966          ├── file4
  3967          └── file5
  3968  
  3969      1 directories, 5 files
  3970  
  3971  You can use any of the filtering options with the tree command (eg
  3972  –include and –exclude). You can also use –fast-list.
  3973  
The tree command has many options for controlling the listing which are
compatible with the unix tree command. Note that not all of them have
short options as they conflict with rclone’s short options.
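
For example, a sketch listing only two levels deep with modification
times shown:

    rclone tree -D --level 2 remote:path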
  3977  
  3978      rclone tree remote:path [flags]
  3979  
  3980  Options
  3981  
  3982        -a, --all             All files are listed (list . files too).
  3983        -C, --color           Turn colorization on always.
  3984        -d, --dirs-only       List directories only.
  3985            --dirsfirst       List directories before files (-U disables).
  3986            --full-path       Print the full path prefix for each file.
  3987        -h, --help            help for tree
  3988            --human           Print the size in a more human readable way.
  3989            --level int       Descend only level directories deep.
  3990        -D, --modtime         Print the date of last modification.
  3991        -i, --noindent        Don't print indentation lines.
  3992            --noreport        Turn off file/directory count at end of tree listing.
  3993        -o, --output string   Output to file instead of stdout.
  3994        -p, --protections     Print the protections for each file.
  3995        -Q, --quote           Quote filenames with double quotes.
  3996        -s, --size            Print the size in bytes of each file.
  3997            --sort string     Select sort: name,version,size,mtime,ctime.
  3998            --sort-ctime      Sort files by last status change time.
  3999        -t, --sort-modtime    Sort files by last modification time.
  4000        -r, --sort-reverse    Reverse the order of the sort.
  4001        -U, --unsorted        Leave files unsorted.
  4002            --version         Sort files alphanumerically by version.
  4003  
  4004  SEE ALSO
  4005  
  4006  -   rclone - Show help for rclone commands, flags and backends.
  4007  
  4008  Auto generated by spf13/cobra on 15-Jun-2019
  4009  
  4010  
  4011  Copying single files
  4012  
  4013  rclone normally syncs or copies directories. However, if the source
  4014  remote points to a file, rclone will just copy that file. The
  4015  destination remote must point to a directory - rclone will give the
  4016  error
  4017  Failed to create file system for "remote:file": is a file not a directory
  4018  if it isn’t.
  4019  
  4020  For example, suppose you have a remote with a file in called test.jpg,
  4021  then you could copy just that file like this
  4022  
  4023      rclone copy remote:test.jpg /tmp/download
  4024  
  4025  The file test.jpg will be placed inside /tmp/download.
  4026  
  4027  This is equivalent to specifying
  4028  
  4029      rclone copy --files-from /tmp/files remote: /tmp/download
  4030  
  4031  Where /tmp/files contains the single line
  4032  
  4033      test.jpg
  4034  
  4035  It is recommended to use copy when copying individual files, not sync.
  4036  They have pretty much the same effect but copy will use a lot less
  4037  memory.
  4038  
  4039  
  4040  Syntax of remote paths
  4041  
The syntax of the paths passed to the rclone command is as follows.
  4043  
  4044  /path/to/dir
  4045  
  4046  This refers to the local file system.
  4047  
On Windows \ may be used instead of / in local paths ONLY; non-local
paths must use /.
  4050  
  4051  These paths needn’t start with a leading / - if they don’t then they
  4052  will be relative to the current directory.
  4053  
  4054  remote:path/to/dir
  4055  
  4056  This refers to a directory path/to/dir on remote: as defined in the
  4057  config file (configured with rclone config).
  4058  
  4059  remote:/path/to/dir
  4060  
On most backends this refers to the same directory as
  4062  remote:path/to/dir and that format should be preferred. On a very small
  4063  number of remotes (FTP, SFTP, Dropbox for business) this will refer to a
  4064  different directory. On these, paths without a leading / will refer to
  4065  your “home” directory and paths with a leading / will refer to the root.
  4066  
  4067  :backend:path/to/dir
  4068  
  4069  This is an advanced form for creating remotes on the fly. backend should
  4070  be the name or prefix of a backend (the type in the config file) and all
  4071  the configuration for the backend should be provided on the command line
  4072  (or in environment variables).
  4073  
  4074  Here are some examples:
  4075  
  4076      rclone lsd --http-url https://pub.rclone.org :http:
  4077  
  4078  To list all the directories in the root of https://pub.rclone.org/.
  4079  
  4080      rclone lsf --http-url https://example.com :http:path/to/dir
  4081  
  4082  To list files and directories in https://example.com/path/to/dir/
  4083  
  4084      rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir
  4085  
  4086  To copy files and directories in https://example.com/path/to/dir to
  4087  /tmp/dir.
  4088  
  4089      rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir
  4090  
  4091  To copy files and directories from example.com in the relative directory
  4092  path/to/dir to /tmp/dir using sftp.
  4093  
  4094  
  4095  Quoting and the shell
  4096  
  4097  When you are typing commands to your computer you are using something
  4098  called the command line shell. This interprets various characters in an
  4099  OS specific way.
  4100  
  4101  Here are some gotchas which may help users unfamiliar with the shell
  4102  rules
  4103  
  4104  Linux / OSX
  4105  
  4106  If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc)
  4107  then you must quote them. Use single quotes ' by default.
  4108  
  4109      rclone copy 'Important files?' remote:backup
  4110  
  4111  If you want to send a ' you will need to use ", eg
  4112  
  4113      rclone copy "O'Reilly Reviews" remote:backup
  4114  
  4115  The rules for quoting metacharacters are complicated and if you want the
  4116  full details you’ll have to consult the manual page for your shell.
  4117  
  4118  Windows
  4119  
If your names have spaces in them you need to put them in ", eg
  4121  
  4122      rclone copy "E:\folder name\folder name\folder name" remote:backup
  4123  
  4124  If you are using the root directory on its own then don’t quote it (see
  4125  #464 for why), eg
  4126  
  4127      rclone copy E:\ remote:backup
  4128  
  4129  
  4130  Copying files or directories with : in the names
  4131  
  4132  rclone uses : to mark a remote name. This is, however, a valid filename
  4133  component in non-Windows OSes. The remote name parser will only search
  4134  for a : up to the first / so if you need to act on a file or directory
  4135  like this then use the full path starting with a /, or use ./ as a
  4136  current directory prefix.
  4137  
  4138  So to sync a directory called sync:me to a remote called remote: use
  4139  
  4140      rclone sync ./sync:me remote:path
  4141  
  4142  or
  4143  
  4144      rclone sync /full/path/to/sync:me remote:path
  4145  
  4146  
  4147  Server Side Copy
  4148  
  4149  Most remotes (but not all - see the overview) support server side copy.
  4150  
  4151  This means if you want to copy one folder to another then rclone won’t
  4152  download all the files and re-upload them; it will instruct the server
  4153  to copy them in place.
  4154  
  4155  Eg
  4156  
  4157      rclone copy s3:oldbucket s3:newbucket
  4158  
  4159  Will copy the contents of oldbucket to newbucket without downloading and
  4160  re-uploading.
  4161  
  4162  Remotes which don’t support server side copy WILL download and re-upload
  4163  in this case.
  4164  
  4165  Server side copies are used with sync and copy and will be identified in
the log when using the -v flag. The move command may also use them if
the remote doesn’t support server side move directly. This is done by
  4168  issuing a server side copy then a delete which is much quicker than a
  4169  download and re-upload.
  4170  
  4171  Server side copies will only be attempted if the remote names are the
  4172  same.
  4173  
  4174  This can be used when scripting to make aged backups efficiently, eg
  4175  
  4176      rclone sync remote:current-backup remote:previous-backup
  4177      rclone sync /path/to/files remote:current-backup
  4178  
  4179  
  4180  Options
  4181  
  4182  Rclone has a number of options to control its behaviour.
  4183  
  4184  Options that take parameters can have the values passed in two ways,
  4185  --option=value or --option value. However boolean (true/false) options
  4186  behave slightly differently to the other options in that --boolean sets
  4187  the option to true and the absence of the flag sets it to false. It is
  4188  also possible to specify --boolean=false or --boolean=true. Note that
  4189  --boolean false is not valid - this is parsed as --boolean and the false
  4190  is parsed as an extra command line argument for rclone.
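
For example, using the boolean --dry-run flag (any other boolean flag
behaves the same way):

    rclone sync --dry-run /path/to/files remote:backup        # flag presence sets it to true
    rclone sync --dry-run=false /path/to/files remote:backup  # explicitly sets it to false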
  4191  
  4192  Options which use TIME use the go time parser. A duration string is a
  4193  possibly signed sequence of decimal numbers, each with optional fraction
  4194  and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid time units
  4195  are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”.
  4196  
Options which use SIZE use kByte by default. However, a suffix of b for
bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for
PBytes may be used. These are the binary units, eg 1, 2**10, 2**20,
2**30, 2**40 and 2**50 respectively.
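
For example, a hypothetical command combining a TIME value and some
SIZE values (the paths and remote name are placeholders):

    rclone copy /path/to/src remote:dst --contimeout 1m30s --bwlimit 10M --buffer-size 16M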
  4201  
  4202  –backup-dir=DIR
  4203  
  4204  When using sync, copy or move any files which would have been
  4205  overwritten or deleted are moved in their original hierarchy into this
  4206  directory.
  4207  
  4208  If --suffix is set, then the moved files will have the suffix added to
  4209  them. If there is a file with the same path (after the suffix has been
  4210  added) in DIR, then it will be overwritten.
  4211  
  4212  The remote in use must support server side move or copy and you must use
  4213  the same remote as the destination of the sync. The backup directory
  4214  must not overlap the destination directory.
  4215  
  4216  For example
  4217  
  4218      rclone sync /path/to/local remote:current --backup-dir remote:old
  4219  
will sync /path/to/local to remote:current, but any files which would
have been updated or deleted will be stored in remote:old.
  4222  
  4223  If running rclone from a script you might want to use today’s date as
  4224  the directory name passed to --backup-dir to store the old files, or you
  4225  might want to pass --suffix with today’s date.
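
As a sketch of that scripted approach (the paths and remote names are
placeholders, and the date format is just one possibility):

    # keep anything which would be overwritten or deleted in a dated directory
    rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)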
  4226  
  4227  –bind string
  4228  
  4229  Local address to bind to for outgoing connections. This can be an IPv4
  4230  address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the
  4231  host name doesn’t resolve or resolves to more than one IP address it
  4232  will give an error.
  4233  
  4234  –bwlimit=BANDWIDTH_SPEC
  4235  
  4236  This option controls the bandwidth limit. Limits can be specified in two
  4237  ways: As a single limit, or as a timetable.
  4238  
  4239  Single limits last for the duration of the session. To use a single
  4240  limit, specify the desired bandwidth in kBytes/s, or use a suffix
  4241  b|k|M|G. The default is 0 which means to not limit bandwidth.
  4242  
  4243  For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
  4244  
  4245  It is also possible to specify a “timetable” of limits, which will cause
  4246  certain limits to be applied at certain times. To specify a timetable,
  4247  format your entries as “WEEKDAY-HH:MM,BANDWIDTH
WEEKDAY-HH:MM,BANDWIDTH…” where: WEEKDAY is an optional element. It can
be written as the whole word or using only the first 3 characters.
HH:MM is an hour from 00:00 to 23:59.
  4251  
  4252  An example of a typical timetable to avoid link saturation during
  4253  daytime working hours could be:
  4254  
  4255  --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
  4256  
In this example, the transfer bandwidth will be set every day to
512kBytes/sec at 8am. At noon, it will rise to 10MBytes/s, and drop
back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set
to 30MBytes/s, and at 11pm it will be completely disabled (full speed).
Anything between 11pm and 8am will remain unlimited.
  4262  
  4263  An example of timetable with WEEKDAY could be:
  4264  
  4265  --bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"
  4266  
This means that the transfer bandwidth will be set to 512kBytes/sec on
Monday. It will rise to 10MBytes/s before the end of Friday. At 10:00
on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will be
unlimited.
  4271  
  4272  Timeslots without weekday are extended to whole week. So this one
  4273  example:
  4274  
  4275  --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
  4276  
  4277  Is equal to this:
  4278  
--bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
  4280  
  4281  Bandwidth limits only apply to the data transfer. They don’t apply to
  4282  the bandwidth of the directory listings etc.
  4283  
  4284  Note that the units are Bytes/s, not Bits/s. Typically connections are
  4285  measured in Bits/s - to convert divide by 8. For example, let’s say you
  4286  have a 10 Mbit/s connection and you wish rclone to use half of it - 5
  4287  Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
  4288  parameter for rclone.
  4289  
  4290  On Unix systems (Linux, MacOS, …) the bandwidth limiter can be toggled
by sending a SIGUSR2 signal to rclone. This allows you to remove the
bandwidth limit from a long running rclone transfer and to quickly
restore it to the value specified with --bwlimit when needed. Assuming there
  4294  is only one rclone instance running, you can toggle the limiter like
  4295  this:
  4296  
  4297      kill -SIGUSR2 $(pidof rclone)
  4298  
If you configure rclone with a remote control then you can change the
bwlimit dynamically:
  4301  
  4302      rclone rc core/bwlimit rate=1M
  4303  
  4304  –buffer-size=SIZE
  4305  
  4306  Use this sized buffer to speed up file transfers. Each --transfer will
  4307  use this much memory for buffering.
  4308  
  4309  When using mount or cmount each open file descriptor will use this much
  4310  memory for buffering. See the mount documentation for more details.
  4311  
  4312  Set to 0 to disable the buffering for the minimum memory usage.
  4313  
Note that the memory allocation of the buffers is influenced by the
--use-mmap flag.
  4316  
  4317  –checkers=N
  4318  
  4319  The number of checkers to run in parallel. Checkers do the equality
  4320  checking of files during a sync. For some storage systems (eg S3, Swift,
  4321  Dropbox) this can take a significant amount of time so they are run in
  4322  parallel.
  4323  
  4324  The default is to run 8 checkers in parallel.
  4325  
  4326  -c, –checksum
  4327  
  4328  Normally rclone will look at modification time and size of files to see
  4329  if they are equal. If you set this flag then rclone will check the file
  4330  hash and size to determine if files are equal.
  4331  
  4332  This is useful when the remote doesn’t support setting modified time and
  4333  a more accurate sync is desired than just checking the file size.
  4334  
  4335  This is very useful when transferring between remotes which store the
  4336  same hash type on the object, eg Drive and Swift. For details of which
  4337  remotes support which hash type see the table in the overview section.
  4338  
  4339  Eg rclone --checksum sync s3:/bucket swift:/bucket would run much
  4340  quicker than without the --checksum flag.
  4341  
  4342  When using this flag, rclone won’t update mtimes of remote files if they
  4343  are incorrect as it would normally.
  4344  
  4345  –config=CONFIG_FILE
  4346  
  4347  Specify the location of the rclone config file.
  4348  
  4349  Normally the config file is in your home directory as a file called
  4350  .config/rclone/rclone.conf (or .rclone.conf if created with an older
  4351  version). If $XDG_CONFIG_HOME is set it will be at
  4352  $XDG_CONFIG_HOME/rclone/rclone.conf.
  4353  
  4354  If there is a file rclone.conf in the same directory as the rclone
  4355  executable it will be preferred. This file must be created manually for
  4356  Rclone to use it, it will never be created automatically.
  4357  
  4358  If you run rclone config file you will see where the default location is
  4359  for you.
  4360  
  4361  Use this flag to override the config location, eg
  4362  rclone --config=".myconfig" .config.
  4363  
  4364  –contimeout=TIME
  4365  
  4366  Set the connection timeout. This should be in go time format which looks
  4367  like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.
  4368  
  4369  The connection timeout is the amount of time rclone will wait for a
  4370  connection to go through to a remote object storage system. It is 1m by
  4371  default.
  4372  
  4373  –dedupe-mode MODE
  4374  
  4375  Mode to run dedupe command in. One of interactive, skip, first, newest,
  4376  oldest, rename. The default is interactive. See the dedupe command for
  4377  more information as to what these options mean.
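
For example, to run dedupe non-interactively, keeping the newest
version of each duplicate (remote:path is a placeholder):

    rclone dedupe --dedupe-mode newest remote:path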
  4378  
  4379  –disable FEATURE,FEATURE,…
  4380  
  4381  This disables a comma separated list of optional features. For example
  4382  to disable server side move and server side copy use:
  4383  
  4384      --disable move,copy
  4385  
The features can be given in any case.
  4387  
  4388  To see a list of which features can be disabled use:
  4389  
  4390      --disable help
  4391  
  4392  See the overview features and optional features to get an idea of which
  4393  feature does what.
  4394  
  4395  This flag can be useful for debugging and in exceptional circumstances
  4396  (eg Google Drive limiting the total volume of Server Side Copies to
  4397  100GB/day).
  4398  
  4399  -n, –dry-run
  4400  
  4401  Do a trial run with no permanent changes. Use this to see what rclone
  4402  would do without actually doing it. Useful when setting up the sync
  4403  command which deletes files in the destination.
  4404  
  4405  –ignore-case-sync
  4406  
Using this option will cause rclone to ignore the case of file names
when synchronizing, so a file will not be copied/synced if a file whose
name differs only in case already exists on the destination.
  4410  
  4411  –ignore-checksum
  4412  
  4413  Normally rclone will check that the checksums of transferred files
  4414  match, and give an error “corrupted on transfer” if they don’t.
  4415  
  4416  You can use this option to skip that check. You should only use it if
  4417  you have had the “corrupted on transfer” error message and you are sure
  4418  you might want to transfer potentially corrupted data.
  4419  
  4420  –ignore-existing
  4421  
  4422  Using this option will make rclone unconditionally skip all files that
  4423  exist on the destination, no matter the content of these files.
  4424  
  4425  While this isn’t a generally recommended option, it can be useful in
  4426  cases where your files change due to encryption. However, it cannot
  4427  correct partial transfers in case a transfer was interrupted.
  4428  
  4429  –ignore-size
  4430  
  4431  Normally rclone will look at modification time and size of files to see
  4432  if they are equal. If you set this flag then rclone will check only the
  4433  modification time. If --checksum is set then it only checks the
  4434  checksum.
  4435  
  4436  It will also cause rclone to skip verifying the sizes are the same after
  4437  transfer.
  4438  
  4439  This can be useful for transferring files to and from OneDrive which
  4440  occasionally misreports the size of image files (see #399 for more
  4441  info).
  4442  
  4443  -I, –ignore-times
  4444  
  4445  Using this option will cause rclone to unconditionally upload all files
  4446  regardless of the state of files on the destination.
  4447  
  4448  Normally rclone would skip any files that have the same modification
  4449  time and are the same size (or have the same checksum if using
  4450  --checksum).
  4451  
  4452  –immutable
  4453  
  4454  Treat source and destination files as immutable and disallow
  4455  modification.
  4456  
  4457  With this option set, files will be created and deleted as requested,
  4458  but existing files will never be updated. If an existing file does not
  4459  match between the source and destination, rclone will give the error
  4460  Source and destination exist but do not match: immutable file modified.
  4461  
  4462  Note that only commands which transfer files (e.g. sync, copy, move) are
  4463  affected by this behavior, and only modification is disallowed. Files
  4464  may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g.
  4465  sync, move). Use copy --immutable if it is desired to avoid deletion as
  4466  well as modification.
  4467  
  4468  This can be useful as an additional layer of protection for immutable or
  4469  append-only data sets (notably backup archives), where modification
  4470  implies corruption and should not be propagated.
  4471  
  4472  
  4473  –leave-root
  4474  
  4475  During rmdirs it will not remove root directory, even if it’s empty.
  4476  
  4477  –log-file=FILE
  4478  
  4479  Log all of rclone’s output to FILE. This is not active by default. This
  4480  can be useful for tracking down problems with syncs in combination with
  4481  the -v flag. See the Logging section for more info.
  4482  
  4483  Note that if you are using the logrotate program to manage rclone’s
  4484  logs, then you should use the copytruncate option as rclone doesn’t have
  4485  a signal to rotate logs.
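
For example, a hypothetical sync which writes an INFO level log to a
file (the paths and remote name are placeholders):

    rclone sync /path/to/src remote:dst -v --log-file /var/log/rclone.log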
  4486  
  4487  –log-format LIST
  4488  
  4489  Comma separated list of log format options. date, time, microseconds,
  4490  longfile, shortfile, UTC. The default is “date,time”.
  4491  
  4492  –log-level LEVEL
  4493  
  4494  This sets the log level for rclone. The default log level is NOTICE.
  4495  
  4496  DEBUG is equivalent to -vv. It outputs lots of debug info - useful for
  4497  bug reports and really finding out what rclone is doing.
  4498  
  4499  INFO is equivalent to -v. It outputs information about each transfer and
  4500  prints stats once a minute by default.
  4501  
  4502  NOTICE is the default log level if no logging flags are supplied. It
  4503  outputs very little when things are working normally. It outputs
  4504  warnings and significant events.
  4505  
  4506  ERROR is equivalent to -q. It only outputs error messages.
  4507  
  4508  –low-level-retries NUMBER
  4509  
  4510  This controls the number of low level retries rclone does.
  4511  
  4512  A low level retry is used to retry a failing operation - typically one
  4513  HTTP request. This might be uploading a chunk of a big file for example.
  4514  You will see low level retries in the log with the -v flag.
  4515  
  4516  This shouldn’t need to be changed from the default in normal operations.
  4517  However, if you get a lot of low level retries you may wish to reduce
  4518  the value so rclone moves on to a high level retry (see the --retries
  4519  flag) quicker.
  4520  
  4521  Disable low level retries with --low-level-retries 1.
  4522  
  4523  –max-backlog=N
  4524  
  4525  This is the maximum allowable backlog of files in a sync/copy/move
  4526  queued for being checked or transferred.
  4527  
  4528  This can be set arbitrarily large. It will only use memory when the
  4529  queue is in use. Note that it will use in the order of N kB of memory
  4530  when the backlog is in use.
  4531  
  4532  Setting this large allows rclone to calculate how many files are pending
  4533  more accurately and give a more accurate estimated finish time.
  4534  
  4535  Setting this small will make rclone more synchronous to the listings of
  4536  the remote which may be desirable.
  4537  
  4538  –max-delete=N
  4539  
  4540  This tells rclone not to delete more than N files. If that limit is
  4541  exceeded then a fatal error will be generated and rclone will stop the
  4542  operation in progress.
  4543  
  4544  –max-depth=N
  4545  
  4546  This modifies the recursion depth for all the commands except purge.
  4547  
  4548  So if you do rclone --max-depth 1 ls remote:path you will see only the
  4549  files in the top level directory. Using --max-depth 2 means you will see
  4550  all the files in first two directory levels and so on.
  4551  
  4552  For historical reasons the lsd command defaults to using a --max-depth
  4553  of 1 - you can override this with the command line flag.
  4554  
You can use this flag to disable recursion (with --max-depth 1).
  4556  
  4557  Note that if you use this with sync and --delete-excluded the files not
  4558  recursed through are considered excluded and will be deleted on the
  4559  destination. Test first with --dry-run if you are not sure what will
  4560  happen.
  4561  
  4562  –max-transfer=SIZE
  4563  
  4564  Rclone will stop transferring when it has reached the size specified.
  4565  Defaults to off.
  4566  
  4567  When the limit is reached all transfers will stop immediately.
  4568  
  4569  Rclone will exit with exit code 8 if the transfer limit is reached.
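
A minimal scripting sketch, assuming a POSIX shell and placeholder
paths, showing the exit code being checked:

    rclone copy /path/to/src remote:dst --max-transfer 10G
    if [ $? -eq 8 ]; then
        echo "transfer limit reached - run again later to copy the rest"
    fi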
  4570  
  4571  –modify-window=TIME
  4572  
  4573  When checking whether a file has been modified, this is the maximum
  4574  allowed time difference that a file can have and still be considered
  4575  equivalent.
  4576  
  4577  The default is 1ns unless this is overridden by a remote. For example OS
  4578  X only stores modification times to the nearest second so if you are
  4579  reading and writing to an OS X filing system this will be 1s by default.
  4580  
  4581  This command line flag allows you to override that computed default.
  4582  
  4583  –multi-thread-cutoff=SIZE
  4584  
  4585  When downloading files to the local backend above this size, rclone will
  4586  use multiple threads to download the file. (default 250M)
  4587  
Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on
unix or NTSetInformationFile on Windows, both of which take no time)
  4590  then each thread writes directly into the file at the correct place.
  4591  This means that rclone won’t create fragmented or sparse files and there
  4592  won’t be any assembly time at the end of the transfer.
  4593  
The number of threads used to download is controlled by
--multi-thread-streams.
  4596  
  4597  Use -vv if you wish to see info about the threads.
  4598  
  4599  This will work with the sync/copy/move commands and friends
  4600  copyto/moveto. Multi thread downloads will be used with rclone mount and
  4601  rclone serve if --vfs-cache-mode is set to writes or above.
  4602  
  4603  NB that this ONLY works for a local destination but will work with any
  4604  source.
  4605  
  4606  –multi-thread-streams=N
  4607  
  4608  When using multi thread downloads (see above --multi-thread-cutoff) this
  4609  sets the maximum number of streams to use. Set to 0 to disable multi
  4610  thread downloads. (Default 4)
  4611  
  4612  Exactly how many streams rclone uses for the download depends on the
  4613  size of the file. To calculate the number of download streams Rclone
  4614  divides the size of the file by the --multi-thread-cutoff and rounds up,
  4615  up to the maximum set with --multi-thread-streams.
  4616  
  4617  So if --multi-thread-cutoff 250MB and --multi-thread-streams 4 are in
  4618  effect (the defaults):
  4619  
-   0MB..250MB files will be downloaded with 1 stream
  4621  -   250MB..500MB files will be downloaded with 2 streams
  4622  -   500MB..750MB files will be downloaded with 3 streams
  4623  -   750MB+ files will be downloaded with 4 streams
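
For example, a hypothetical download from a remote to the local disk
with a lower cutoff and more streams (the remote name and paths are
placeholders):

    rclone copy remote:bigfiles /path/to/local --multi-thread-cutoff 100M --multi-thread-streams 8 -vv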
  4624  
  4625  –no-gzip-encoding
  4626  
  4627  Don’t set Accept-Encoding: gzip. This means that rclone won’t ask the
  4628  server for compressed files automatically. Useful if you’ve set the
  4629  server to return files with Content-Encoding: gzip but you uploaded
  4630  compressed files.
  4631  
  4632  There is no need to set this in normal operation, and doing so will
  4633  decrease the network transfer efficiency of rclone.
  4634  
  4635  –no-traverse
  4636  
  4637  The --no-traverse flag controls whether the destination file system is
  4638  traversed when using the copy or move commands. --no-traverse is not
  4639  compatible with sync and will be ignored if you supply it with sync.
  4640  
  4641  If you are only copying a small number of files (or are filtering most
  4642  of the files) and/or have a large number of files on the destination
  4643  then --no-traverse will stop rclone listing the destination and save
  4644  time.
  4645  
  4646  However, if you are copying a large number of files, especially if you
  4647  are doing a copy where lots of the files under consideration haven’t
  4648  changed and won’t need copying then you shouldn’t use --no-traverse.
  4649  
  4650  See rclone copy for an example of how to use it.
  4651  
  4652  –no-update-modtime
  4653  
  4654  When using this flag, rclone won’t update modification times of remote
  4655  files if they are incorrect as it would normally.
  4656  
  4657  This can be used if the remote is being synced with another tool also
  4658  (eg the Google Drive client).
  4659  
  4660  -P, –progress
  4661  
  4662  This flag makes rclone update the stats in a static block in the
  4663  terminal providing a realtime overview of the transfer.
  4664  
  4665  Any log messages will scroll above the static block. Log messages will
  4666  push the static block down to the bottom of the terminal where it will
  4667  stay.
  4668  
  4669  Normally this is updated every 500mS but this period can be overridden
  4670  with the --stats flag.
  4671  
  4672  This can be used with the --stats-one-line flag for a simpler display.
  4673  
Note: On Windows until this bug is fixed all non-ASCII characters will
be replaced with . when --progress is in use.
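
For example, a hypothetical copy with a compact progress display
updated every 2 seconds (the paths and remote name are placeholders):

    rclone copy /path/to/src remote:dst -P --stats 2s --stats-one-line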
  4676  
  4677  -q, –quiet
  4678  
  4679  Normally rclone outputs stats and a completion message. If you set this
  4680  flag it will make as little output as possible.
  4681  
  4682  –retries int
  4683  
Retry the entire sync if it fails this many times (default 3).
  4685  
  4686  Some remotes can be unreliable and a few retries help pick up the files
  4687  which didn’t get transferred because of errors.
  4688  
  4689  Disable retries with --retries 1.
  4690  
  4691  –retries-sleep=TIME
  4692  
  4693  This sets the interval between each retry specified by --retries
  4694  
  4695  The default is 0. Use 0 to disable.
  4696  
  4697  –size-only
  4698  
  4699  Normally rclone will look at modification time and size of files to see
  4700  if they are equal. If you set this flag then rclone will check only the
  4701  size.
  4702  
This can be useful when transferring files from Dropbox which have been
modified by the desktop sync client, which doesn’t set checksums or
modification times in the same way as rclone.
  4706  
  4707  –stats=TIME
  4708  
  4709  Commands which transfer data (sync, copy, copyto, move, moveto) will
  4710  print data transfer stats at regular intervals to show their progress.
  4711  
  4712  This sets the interval.
  4713  
  4714  The default is 1m. Use 0 to disable.
  4715  
  4716  If you set the stats interval then all commands can show stats. This can
  4717  be useful when running other commands, check or mount for example.
  4718  
  4719  Stats are logged at INFO level by default which means they won’t show at
  4720  default log level NOTICE. Use --stats-log-level NOTICE or -v to make
  4721  them show. See the Logging section for more info on log levels.
  4722  
  4723  Note that on macOS you can send a SIGINFO (which is normally ctrl-T in
  4724  the terminal) to make the stats print immediately.
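
For example, to print stats every 10 seconds at a level which shows
without -v (the paths and remote name are placeholders):

    rclone copy /path/to/src remote:dst --stats 10s --stats-log-level NOTICE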
  4725  
  4726  –stats-file-name-length integer
  4727  
  4728  By default, the --stats output will truncate file names and paths longer
  4729  than 40 characters. This is equivalent to providing
  4730  --stats-file-name-length 40. Use --stats-file-name-length 0 to disable
  4731  any truncation of file names printed by stats.
  4732  
  4733  –stats-log-level string
  4734  
  4735  Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or
  4736  ERROR. The default is INFO. This means at the default level of logging
  4737  which is NOTICE the stats won’t show - if you want them to then use
  4738  --stats-log-level NOTICE. See the Logging section for more info on log
  4739  levels.
  4740  
  4741  –stats-one-line
  4742  
  4743  When this is specified, rclone condenses the stats into a single line
  4744  showing the most important stats only.
  4745  
  4746  –stats-one-line-date
  4747  
  4748  When this is specified, rclone enables the single-line stats and
  4749  prepends the display with a date string. The default is
  4750  2006/01/02 15:04:05 -
  4751  
  4752  –stats-one-line-date-format
  4753  
  4754  When this is specified, rclone enables the single-line stats and
  4755  prepends the display with a user-supplied date string. The date string
  4756  MUST be enclosed in quotes. Follow golang specs for date formatting
  4757  syntax.
  4758  
  4759  –stats-unit=bits|bytes
  4760  
  4761  By default, data transfer rates will be printed in bytes/second.
  4762  
  4763  This option allows the data rate to be printed in bits/second.
  4764  
  4765  Data transfer volume will still be reported in bytes.
  4766  
  4767  The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals
  4768  1,048,576 bits/s and not 1,000,000 bits/s.
  4769  
  4770  The default is bytes.
  4771  
  4772  –suffix=SUFFIX
  4773  
  4774  This is for use with --backup-dir only. If this isn’t set then
  4775  --backup-dir will move files with their original name. If it is set then
  4776  the files will have SUFFIX added on to them.
  4777  
  4778  See --backup-dir for more info.
  4779  
  4780  –suffix-keep-extension
  4781  
When using --suffix, setting this causes rclone to put the SUFFIX
before the extension of the files that it backs up rather than after.
  4784  
  4785  So let’s say we had --suffix -2019-01-01, without the flag file.txt
  4786  would be backed up to file.txt-2019-01-01 and with the flag it would be
  4787  backed up to file-2019-01-01.txt. This can be helpful to make sure the
  4788  suffixed files can still be opened.
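
Putting this together with --backup-dir, a hypothetical sync using a
dated suffix placed before the extension (the paths and remote names
are placeholders):

    rclone sync /path/to/local remote:current --backup-dir remote:old --suffix -2019-01-01 --suffix-keep-extension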
  4789  
  4790  –syslog
  4791  
  4792  On capable OSes (not Windows or Plan9) send all log output to syslog.
  4793  
  4794  This can be useful for running rclone in a script or rclone mount.
  4795  
  4796  –syslog-facility string
  4797  
  4798  If using --syslog this sets the syslog facility (eg KERN, USER). See
  4799  man syslog for a list of possible facilities. The default facility is
  4800  DAEMON.
  4801  
  4802  –tpslimit float
  4803  
  4804  Limit HTTP transactions per second to this. Default is 0 which is used
  4805  to mean unlimited transactions per second.
  4806  
  4807  For example to limit rclone to 10 HTTP transactions per second use
  4808  --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.
  4809  
  4810  Use this when the number of transactions per second from rclone is
  4811  causing a problem with the cloud storage provider (eg getting you banned
  4812  or rate limited).
  4813  
  4814  This can be very useful for rclone mount to control the behaviour of
  4815  applications using it.
  4816  
  4817  See also --tpslimit-burst.
  4818  
  4819  –tpslimit-burst int
  4820  
  4821  Max burst of transactions for --tpslimit. (default 1)
  4822  
Normally --tpslimit will do exactly the number of transactions per
second specified. However, if you supply --tpslimit-burst then rclone
can save up some transactions from when it was idle, giving a burst of
up to the parameter supplied.
  4827  
  4828  For example if you provide --tpslimit-burst 10 then if rclone has been
  4829  idle for more than 10*--tpslimit then it can do 10 transactions very
  4830  quickly before they are limited again.
  4831  
  4832  This may be used to increase performance of --tpslimit without changing
  4833  the long term average number of transactions per second.
  4834  
  4835  –track-renames
  4836  
  4837  By default, rclone doesn’t keep track of renamed files, so if you rename
  4838  a file locally then sync it to a remote, rclone will delete the old file
  4839  on the remote and upload a new copy.
  4840  
  4841  If you use this flag, and the remote supports server side copy or server
  4842  side move, and the source and destination have a compatible hash, then
  4843  this will track renames during sync operations and perform renaming
  4844  server-side.
  4845  
  4846  Files will be matched by size and hash - if both match then a rename
  4847  will be considered.
  4848  
  4849  If the destination does not support server-side copy or move, rclone
  4850  will fall back to the default behaviour and log an error level message
  4851  to the console. Note: Encrypted destinations are not supported by
  4852  --track-renames.
  4853  
  4854  Note that --track-renames is incompatible with --no-traverse and that it
  4855  uses extra memory to keep track of all the rename candidates.
  4856  
  4857  Note also that --track-renames is incompatible with --delete-before and
  4858  will select --delete-after instead of --delete-during.
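
For example, a hypothetical sync which lets rclone rename files
server-side instead of deleting and re-uploading them (the paths and
remote name are placeholders):

    rclone sync /path/to/src remote:dst --track-renames -v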
  4859  
  4860  –delete-(before,during,after)
  4861  
  4862  This option allows you to specify when files on your destination are
  4863  deleted when you sync folders.
  4864  
  4865  Specifying the value --delete-before will delete all files present on
  4866  the destination, but not on the source _before_ starting the transfer of
  4867  any new or updated files. This uses two passes through the file systems,
  4868  one for the deletions and one for the copies.
  4869  
  4870  Specifying --delete-during will delete files while checking and
  4871  uploading files. This is the fastest option and uses the least memory.
  4872  
  4873  Specifying --delete-after (the default value) will delay deletion of
  4874  files until all new/updated files have been successfully transferred.
  4875  The files to be deleted are collected in the copy pass then deleted
  4876  after the copy pass has completed successfully. The files to be deleted
  4877  are held in memory so this mode may use more memory. This is the safest
  4878  mode as it will only delete files if there have been no errors
  4879  subsequent to that. If there have been errors before the deletions start
  4880  then you will get the message
  4881  not deleting files as there were IO errors.
  4882  
  4883  –fast-list
  4884  
  4885  When doing anything which involves a directory listing (eg sync, copy,
  4886  ls - in fact nearly every command), rclone normally lists a directory
  4887  and processes it before using more directory lists to process any
  4888  subdirectories. This can be parallelised and works very quickly using
  4889  the least amount of memory.
  4890  
  4891  However, some remotes have a way of listing all files beneath a
  4892  directory in one (or a small number) of transactions. These tend to be
  4893  the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
  4894  
  4895  If you use the --fast-list flag then rclone will use this method for
  4896  listing directories. This will have the following consequences for the
  4897  listing:
  4898  
  4899  -   It WILL use fewer transactions (important if you pay for them)
  4900  -   It WILL use more memory. Rclone has to load the whole listing into
  4901      memory.
  4902  -   It _may_ be faster because it uses fewer transactions
  4903  -   It _may_ be slower because it can’t be parallelized
  4904  
  4905  rclone should always give identical results with and without
  4906  --fast-list.
  4907  
  4908  If you pay for transactions and can fit your entire sync listing into
  4909  memory then --fast-list is recommended. If you have a very big sync to
  4910  do then don’t use --fast-list otherwise you will run out of memory.
  4911  
  4912  If you use --fast-list on a remote which doesn’t support it, then rclone
  4913  will just ignore it.
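
For example, a hypothetical sync to a bucket based remote using a
single recursive listing (s3:bucket is a placeholder):

    rclone sync /path/to/src s3:bucket --fast-list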
  4914  
  4915  –timeout=TIME
  4916  
  4917  This sets the IO idle timeout. If a transfer has started but then
  4918  becomes idle for this long it is considered broken and disconnected.
  4919  
  4920  The default is 5m. Set to 0 to disable.
  4921  
  4922  –transfers=N
  4923  
  4924  The number of file transfers to run in parallel. It can sometimes be
  4925  useful to set this to a smaller number if the remote is giving a lot of
  4926  timeouts or bigger if you have lots of bandwidth and a fast remote.
  4927  
  4928  The default is to run 4 file transfers in parallel.
  4929  
  4930  -u, –update
  4931  
  4932  This forces rclone to skip any files which exist on the destination and
  4933  have a modified time that is newer than the source file.
  4934  
  4935  If an existing destination file has a modification time equal (within
  4936  the computed modify window precision) to the source file’s, it will be
  4937  updated if the sizes are different.
  4938  
  4939  On remotes which don’t support mod time directly the time checked will
  4940  be the uploaded time. This means that if uploading to one of these
  4941  remotes, rclone will skip any files which exist on the destination and
  4942  have an uploaded time that is newer than the modification time of the
  4943  source file.
  4944  
  4945  This can be useful when transferring to a remote which doesn’t support
  4946  mod times directly as it is more accurate than a --size-only check and
  4947  faster than using --checksum.
  4948  
  4949  –use-mmap
  4950  
  4951  If this flag is set then rclone will use anonymous memory allocated by
  4952  mmap on Unix based platforms and VirtualAlloc on Windows for its
  4953  transfer buffers (size controlled by --buffer-size). Memory allocated
  4954  like this does not go on the Go heap and can be returned to the OS
  4955  immediately when it is finished with.
  4956  
  4957  If this flag is not set then rclone will allocate and free the buffers
  4958  using the Go memory allocator which may use more memory as memory pages
  4959  are returned less aggressively to the OS.
  4960  
  4961  It is possible this does not work well on all platforms so it is
  4962  disabled by default; in the future it may be enabled by default.
  4963  
  4964  –use-server-modtime
  4965  
Some object-store backends (eg Swift, S3) do not preserve file
  4967  modification times (modtime). On these backends, rclone stores the
  4968  original modtime as additional metadata on the object. By default it
  4969  will make an API call to retrieve the metadata when the modtime is
  4970  needed by an operation.
  4971  
  4972  Use this flag to disable the extra API call and rely instead on the
  4973  server’s modified time. In cases such as a local to remote sync, knowing
  4974  the local file is newer than the time it was last uploaded to the remote
  4975  is sufficient. In those cases, this flag can speed up the process and
  4976  reduce the number of API calls necessary.
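
A minimal sketch of the local to remote case described above, assuming
an S3 style remote called s3:bucket:

    rclone sync /path/to/local s3:bucket --update --use-server-modtime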
  4977  
  4978  -v, -vv, –verbose
  4979  
  4980  With -v rclone will tell you about each file that is transferred and a
  4981  small number of significant events.
  4982  
  4983  With -vv rclone will become very verbose telling you about every file it
  4984  considers and transfers. Please send bug reports with a log with this
  4985  setting.
  4986  
  4987  -V, –version
  4988  
  4989  Prints the version number
  4990  
  4991  
  4992  SSL/TLS options
  4993  
The outgoing SSL/TLS connections rclone makes can be controlled with
  4995  these options. For example this can be very useful with the HTTP or
  4996  WebDAV backends. Rclone HTTP servers have their own set of configuration
  4997  for SSL/TLS which you can find in their documentation.
  4998  
  4999  –ca-cert string
  5000  
  5001  This loads the PEM encoded certificate authority certificate and uses it
  5002  to verify the certificates of the servers rclone connects to.
  5003  
  5004  If you have generated certificates signed with a local CA then you will
  5005  need this flag to connect to servers using those certificates.
  5006  
  5007  –client-cert string
  5008  
  5009  This loads the PEM encoded client side certificate.
  5010  
  5011  This is used for mutual TLS authentication.
  5012  
  5013  The --client-key flag is required too when using this.
  5014  
  5015  –client-key string
  5016  
  5017  This loads the PEM encoded client side private key used for mutual TLS
  5018  authentication. Used in conjunction with --client-cert.
  5019  
  5020  –no-check-certificate=true/false
  5021  
  5022  --no-check-certificate controls whether a client verifies the server’s
  5023  certificate chain and host name. If --no-check-certificate is true, TLS
  5024  accepts any certificate presented by the server and any host name in
  5025  that certificate. In this mode, TLS is susceptible to man-in-the-middle
  5026  attacks.
  5027  
  5028  This option defaults to false.
  5029  
  5030  THIS SHOULD BE USED ONLY FOR TESTING.
  5031  
  5032  
  5033  Configuration Encryption
  5034  
  5035  Your configuration file contains information for logging in to your
  5036  cloud services. This means that you should keep your .rclone.conf file
  5037  in a secure location.
  5038  
  5039  If you are in an environment where that isn’t possible, you can add a
  5040  password to your configuration. This means that you will have to enter
  5041  the password every time you start rclone.
  5042  
  5043  To add a password to your rclone configuration, execute rclone config.
  5044  
  5045      >rclone config
  5046      Current remotes:
  5047  
  5048      e) Edit existing remote
  5049      n) New remote
  5050      d) Delete remote
  5051      s) Set configuration password
  5052      q) Quit config
  5053      e/n/d/s/q>
  5054  
  5055  Go into s, Set configuration password:
  5056  
  5057      e/n/d/s/q> s
  5058      Your configuration is not encrypted.
  5059      If you add a password, you will protect your login information to cloud services.
  5060      a) Add Password
  5061      q) Quit to main menu
  5062      a/q> a
  5063      Enter NEW configuration password:
  5064      password:
  5065      Confirm NEW password:
  5066      password:
  5067      Password set
  5068      Your configuration is encrypted.
  5069      c) Change Password
  5070      u) Unencrypt configuration
  5071      q) Quit to main menu
  5072      c/u/q>
  5073  
  5074  Your configuration is now encrypted, and every time you start rclone you
  5075  will now be asked for the password. In the same menu, you can change the
  5076  password or completely remove encryption from your configuration.
  5077  
  5078  There is no way to recover the configuration if you lose your password.
  5079  
  5080  rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to
  5081  encrypt and authenticate your configuration with secret-key
  5082  cryptography. The password is SHA-256 hashed, which produces the key for
  5083  secretbox. The hashed password is not stored.
  5084  
While this provides very good security, we do not recommend storing
your encrypted rclone configuration in public if it contains sensitive
information, except perhaps if you use a very strong password.
  5088  
  5089  If it is safe in your environment, you can set the RCLONE_CONFIG_PASS
  5090  environment variable to contain your password, in which case it will be
  5091  used for decrypting the configuration.
  5092  
  5093  You can set this for a session from a script. For unix like systems save
  5094  this to a file called set-rclone-password:
  5095  
  5096      #!/bin/echo Source this file don't run it
  5097  
  5098      read -s RCLONE_CONFIG_PASS
  5099      export RCLONE_CONFIG_PASS
  5100  
  5101  Then source the file when you want to use it. From the shell you would
  5102  do source set-rclone-password. It will then ask you for the password and
  5103  set it in the environment variable.
  5104  
  5105  If you are running rclone inside a script, you might want to disable
  5106  password prompts. To do that, pass the parameter --ask-password=false to
  5107  rclone. This will make rclone fail instead of asking for a password if
  5108  RCLONE_CONFIG_PASS doesn’t contain a valid password.
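
For example, a hypothetical unattended invocation which fails rather
than prompting if RCLONE_CONFIG_PASS isn’t set to a valid password (the
paths and remote name are placeholders):

    rclone sync /path/to/src remote:dst --ask-password=false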
  5109  
  5110  
  5111  Developer options
  5112  
  5113  These options are useful when developing or debugging rclone. There are
  5114  also some more remote specific options which aren’t documented here
  5115  which are used for testing. These start with remote name eg
  5116  --drive-test-option - see the docs for the remote in question.
  5117  
  5118  –cpuprofile=FILE
  5119  
  5120  Write CPU profile to file. This can be analysed with go tool pprof.
  5121  
  5122  –dump flag,flag,flag
  5123  
  5124  The --dump flag takes a comma separated list of flags to dump info
  5125  about. These are:
  5126  
  5127  –dump headers
  5128  
  5129  Dump HTTP headers with Authorization: lines removed. May still contain
  5130  sensitive info. Can be very verbose. Useful for debugging only.
  5131  
  5132  Use --dump auth if you do want the Authorization: headers.
  5133  
  5134  –dump bodies
  5135  
  5136  Dump HTTP headers and bodies - may contain sensitive info. Can be very
  5137  verbose. Useful for debugging only.
  5138  
  5139  Note that the bodies are buffered in memory so don’t use this for
  5140  enormous files.
  5141  
  5142  –dump requests
  5143  
  5144  Like --dump bodies but dumps the request bodies and the response
  5145  headers. Useful for debugging download problems.
  5146  
  5147  –dump responses
  5148  
  5149  Like --dump bodies but dumps the response bodies and the request
  5150  headers. Useful for debugging upload problems.
  5151  
  5152  –dump auth
  5153  
  5154  Dump HTTP headers - will contain sensitive info such as Authorization:
  5155  headers - use --dump headers to dump without Authorization: headers. Can
  5156  be very verbose. Useful for debugging only.
  5157  
  5158  –dump filters
  5159  
  5160  Dump the filters to the output. Useful to see exactly what include and
  5161  exclude options are filtering on.
  5162  
  5163  –dump goroutines
  5164  
  5165  This dumps a list of the running go-routines at the end of the command
  5166  to standard output.
  5167  
  5168  –dump openfiles
  5169  
  5170  This dumps a list of the open files at the end of the command. It uses
  5171  the lsof command to do that so you’ll need that installed to use it.
  5172  
  5173  –memprofile=FILE
  5174  
  5175  Write memory profile to file. This can be analysed with go tool pprof.
  5176  
  5177  
  5178  Filtering
  5179  
  5180  For the filtering options
  5181  
  5182  -   --delete-excluded
  5183  -   --filter
  5184  -   --filter-from
  5185  -   --exclude
  5186  -   --exclude-from
  5187  -   --include
  5188  -   --include-from
  5189  -   --files-from
  5190  -   --min-size
  5191  -   --max-size
  5192  -   --min-age
  5193  -   --max-age
  5194  -   --dump filters
  5195  
  5196  See the filtering section.
  5197  
  5198  
  5199  Remote control
  5200  
  5201  For the remote control options and for instructions on how to remote
  5202  control rclone
  5203  
  5204  -   --rc
  5205  -   and anything starting with --rc-
  5206  
  5207  See the remote control section.
  5208  
  5209  
  5210  Logging
  5211  
  5212  rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.
  5213  
  5214  By default, rclone logs to standard error. This means you can redirect
  5215  standard error and still see the normal output of rclone commands (eg
  5216  rclone ls).
  5217  
  5218  By default, rclone will produce Error and Notice level messages.
  5219  
  5220  If you use the -q flag, rclone will only produce Error messages.
  5221  
  5222  If you use the -v flag, rclone will produce Error, Notice and Info
  5223  messages.
  5224  
  5225  If you use the -vv flag, rclone will produce Error, Notice, Info and
  5226  Debug messages.
  5227  
  5228  You can also control the log levels with the --log-level flag.
  5229  
  5230  If you use the --log-file=FILE option, rclone will redirect Error, Info
  5231  and Debug messages along with standard error to FILE.
  5232  
If you use the --syslog flag then rclone will log to syslog and the
--syslog-facility flag controls which facility it uses.
  5235  
  5236  Rclone prefixes all log messages with their level in capitals, eg INFO
  5237  which makes it easy to grep the log file for different kinds of
  5238  information.
  5239  
  5240  
  5241  Exit Code
  5242  
  5243  If any errors occur during the command execution, rclone will exit with
  5244  a non-zero exit code. This allows scripts to detect when rclone
  5245  operations have failed.
  5246  
  5247  During the startup phase, rclone will exit immediately if an error is
  5248  detected in the configuration. There will always be a log message
  5249  immediately before exiting.
  5250  
  5251  When rclone is running it will accumulate errors as it goes along, and
  5252  only exit with a non-zero exit code if (after retries) there were still
  5253  failed transfers. For every error counted there will be a high priority
  5254  log message (visible with -q) showing the message and which file caused
  5255  the problem. A high priority message is also shown when starting a retry
  5256  so the user can see that any previous error messages may not be valid
  5257  after the retry. If rclone has done a retry it will log a high priority
  5258  message if the retry was successful.
  5259  
  5260  List of exit codes
  5261  
  5262  -   0 - success
  5263  -   1 - Syntax or usage error
  5264  -   2 - Error not otherwise categorised
  5265  -   3 - Directory not found
  5266  -   4 - File not found
  5267  -   5 - Temporary error (one that more retries might fix) (Retry errors)
  5268  -   6 - Less serious errors (like 461 errors from dropbox) (NoRetry
  5269      errors)
  5270  -   7 - Fatal error (one that more retries won’t fix, like account
  5271      suspended) (Fatal errors)
-   8 - Transfer exceeded - limit set by --max-transfer reached
  5273  
  5274  
  5275  Environment Variables
  5276  
  5277  Rclone can be configured entirely using environment variables. These can
  5278  be used to set defaults for options or config file entries.
  5279  
  5280  Options
  5281  
  5282  Every option in rclone can have its default set by environment variable.
  5283  
  5284  To find the name of the environment variable, first, take the long
  5285  option name, strip the leading --, change - to _, make upper case and
  5286  prepend RCLONE_.
  5287  
  5288  For example, to always set --stats 5s, set the environment variable
  5289  RCLONE_STATS=5s. If you set stats on the command line this will override
  5290  the environment variable setting.
  5291  
  5292  Or to always use the trash in drive --drive-use-trash, set
  5293  RCLONE_DRIVE_USE_TRASH=true.
  5294  
  5295  The same parser is used for the options and the environment variables so
  5296  they take exactly the same form.
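
A short sketch for a unix like shell (the paths and remote name are
placeholders):

    export RCLONE_STATS=5s              # same as passing --stats 5s on every run
    export RCLONE_DRIVE_USE_TRASH=true  # same as passing --drive-use-trash
    rclone sync /path/to/src remote:dst # picks up the defaults set above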
  5297  
  5298  Config file
  5299  
  5300  You can set defaults for values in the config file on an individual
  5301  remote basis. If you want to use this feature, you will need to discover
  5302  the name of the config items that you want. The easiest way is to run
  5303  through rclone config by hand, then look in the config file to see what
  5304  the values are (the config file can be found by looking at the help for
  5305  --config in rclone help).
  5306  
To find the name of the environment variable that you need to set, take
RCLONE_CONFIG_ + name of remote + _ + name of config file option and
make it all uppercase.
  5310  
  5311  For example, to configure an S3 remote named mys3: without a config file
  5312  (using unix ways of setting environment variables):
  5313  
  5314      $ export RCLONE_CONFIG_MYS3_TYPE=s3
  5315      $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
  5316      $ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
  5317      $ rclone lsd MYS3:
  5318                -1 2016-09-21 12:54:21        -1 my-bucket
  5319      $ rclone listremotes | grep mys3
  5320      mys3:
  5321  
  5322  Note that if you want to create a remote using environment variables you
  5323  must create the ..._TYPE variable as above.
  5324  
  5325  Other environment variables
  5326  
-   RCLONE_CONFIG_PASS set to contain your config file password (see
    Configuration Encryption section)
  5329  -   HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions
  5330      thereof).
  5331      -   HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
    -   The environment values may be either a complete URL or a
        “host[:port]”, in which case the “http” scheme is assumed.
  5334  
  5335  
  5336  
  5337  CONFIGURING RCLONE ON A REMOTE / HEADLESS MACHINE
  5338  
  5339  
  5340  Some of the configurations (those involving oauth2) require an Internet
  5341  connected web browser.
  5342  
  5343  If you are trying to set rclone up on a remote or headless box with no
  5344  browser available on it (eg a NAS or a server in a datacenter) then you
  5345  will need to use an alternative means of configuration. There are two
  5346  ways of doing it, described below.
  5347  
  5348  
  5349  Configuring using rclone authorize
  5350  
  5351  On the headless box
  5352  
  5353      ...
  5354      Remote config
  5355      Use auto config?
  5356       * Say Y if not sure
  5357       * Say N if you are working on a remote or headless machine
  5358      y) Yes
  5359      n) No
  5360      y/n> n
  5361      For this to work, you will need rclone available on a machine that has a web browser available.
  5362      Execute the following on your machine:
  5363          rclone authorize "amazon cloud drive"
  5364      Then paste the result below:
  5365      result>
  5366  
  5367  Then on your main desktop machine
  5368  
  5369      rclone authorize "amazon cloud drive"
  5370      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  5371      Log in and authorize rclone for access
  5372      Waiting for code...
  5373      Got code
  5374      Paste the following into your remote machine --->
  5375      SECRET_TOKEN
  5376      <---End paste
  5377  
  5378  Then back to the headless box, paste in the code
  5379  
  5380      result> SECRET_TOKEN
  5381      --------------------
  5382      [acd12]
  5383      client_id = 
  5384      client_secret = 
  5385      token = SECRET_TOKEN
  5386      --------------------
  5387      y) Yes this is OK
  5388      e) Edit this remote
  5389      d) Delete this remote
  5390      y/e/d>
  5391  
  5392  
  5393  Configuring by copying the config file
  5394  
  5395  Rclone stores all of its config in a single configuration file. This can
  5396  easily be copied to configure a remote rclone.
  5397  
  5398  So first configure rclone on your desktop machine
  5399  
  5400      rclone config
  5401  
  5402  to set up the config file.
  5403  
  5404  Find the config file by running rclone config file, for example
  5405  
  5406      $ rclone config file
  5407      Configuration file is stored at:
  5408      /home/user/.rclone.conf
  5409  
  5410  Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and
  5411  place it in the correct place (use rclone config file on the remote box
  5412  to find out where).
  5413  
  5414  
  5415  
  5416  FILTERING, INCLUDES AND EXCLUDES
  5417  
  5418  
  5419  Rclone has a sophisticated set of include and exclude rules. Some of
  5420  these are based on patterns and some on other things like file size.
  5421  
  5422  The filters are applied for the copy, sync, move, ls, lsl, md5sum,
  5423  sha1sum, size, delete and check operations. Note that purge does not
  5424  obey the filters.
  5425  
  5426  Each path as it passes through rclone is matched against the include and
  5427  exclude rules like --include, --exclude, --include-from, --exclude-from,
  5428  --filter, or --filter-from. The simplest way to try them out is using
  5429  the ls command, or --dry-run together with -v.
  5430  
  5431  
  5432  Patterns
  5433  
  5434  The patterns used to match files for inclusion or exclusion are based on
  5435  “file globs” as used by the unix shell.
  5436  
  5437  If the pattern starts with a / then it only matches at the top level of
  5438  the directory tree, RELATIVE TO THE ROOT OF THE REMOTE (not necessarily
  5439  the root of the local drive). If it doesn’t start with / then it is
  5440  matched starting at the END OF THE PATH, but it will only match a
  5441  complete path element:
  5442  
  5443      file.jpg  - matches "file.jpg"
  5444                - matches "directory/file.jpg"
  5445                - doesn't match "afile.jpg"
  5446                - doesn't match "directory/afile.jpg"
  5447      /file.jpg - matches "file.jpg" in the root directory of the remote
  5448                - doesn't match "afile.jpg"
  5449                - doesn't match "directory/file.jpg"
  5450  
  5451  IMPORTANT Note that you must use / in patterns and not \ even if running
  5452  on Windows.
  5453  
  5454  A * matches anything but not a /.
  5455  
  5456      *.jpg  - matches "file.jpg"
  5457             - matches "directory/file.jpg"
  5458             - doesn't match "file.jpg/something"
  5459  
  5460  Use ** to match anything, including slashes (/).
  5461  
  5462      dir/** - matches "dir/file.jpg"
  5463             - matches "dir/dir1/dir2/file.jpg"
  5464             - doesn't match "directory/file.jpg"
  5465             - doesn't match "adir/file.jpg"
  5466  
  5467  A ? matches any character except a slash /.
  5468  
  5469      l?ss  - matches "less"
  5470            - matches "lass"
  5471            - doesn't match "floss"
  5472  
  5473  A [ and ] together make a character class, such as [a-z] or [aeiou] or
  5474  [[:alpha:]]. See the go regexp docs for more info on these.
  5475  
  5476      h[ae]llo - matches "hello"
  5477               - matches "hallo"
  5478               - doesn't match "hullo"
  5479  
  5480  A { and } define a choice between elements. It should contain a comma
  5481  separated list of patterns, any of which might match. These patterns can
  5482  contain wildcards.
  5483  
  5484      {one,two}_potato - matches "one_potato"
  5485                       - matches "two_potato"
  5486                       - doesn't match "three_potato"
  5487                       - doesn't match "_potato"
  5488  
  5489  Special characters can be escaped with a \ before them.
  5490  
  5491      \*.jpg       - matches "*.jpg"
  5492      \\.jpg       - matches "\.jpg"
  5493      \[one\].jpg  - matches "[one].jpg"
  5494  
  5495  Patterns are case sensitive unless the --ignore-case flag is used.
  5496  
  5497  Without --ignore-case (default)
  5498  
  5499      potato - matches "potato"
  5500             - doesn't match "POTATO"
  5501  
  5502  With --ignore-case
  5503  
  5504      potato - matches "potato"
  5505             - matches "POTATO"
  5506  
  5507  Note also that rclone filter globs can only be used in one of the filter
  5508  command line flags, not in the specification of the remote, so
  5509  rclone copy "remote:dir*.jpg" /path/to/dir won’t work - what is required
  5510  is rclone --include "*.jpg" copy remote:dir /path/to/dir
  5511  
  5512  Directories
  5513  
  5514  Rclone keeps track of directories that could match any file patterns.
  5515  
  5516  Eg if you add the include rule
  5517  
  5518      /a/*.jpg
  5519  
  5520  Rclone will synthesize the directory include rule
  5521  
  5522      /a/
  5523  
  5524  If you put any rules which end in / then it will only match directories.
  5525  
  5526  Directory matches are ONLY used to optimise directory access patterns -
  5527  you must still match the files that you want to match. Directory matches
  5528  won’t optimise anything on bucket based remotes (eg s3, swift, google
  5529  compute storage, b2) which don’t have a concept of directory.
  5530  
  5531  Differences between rsync and rclone patterns
  5532  
  5533  Rclone implements bash style {a,b,c} glob matching which rsync doesn’t.
  5534  
  5535  Rclone always does a wildcard match so \ must always escape a \.
  5536  
  5537  
  5538  How the rules are used
  5539  
  5540  Rclone maintains a combined list of include rules and exclude rules.
  5541  
  5542  Each file is matched in order, starting from the top, against the rule
  5543  in the list until it finds a match. The file is then included or
  5544  excluded according to the rule type.
  5545  
  5546  If the matcher fails to find a match after testing against all the
  5547  entries in the list then the path is included.
  5548  
  5549  For example given the following rules, + being include, - being exclude,
  5550  
  5551      - secret*.jpg
  5552      + *.jpg
  5553      + *.png
  5554      + file2.avi
  5555      - *
  5556  
  5557  This would include
  5558  
  5559  -   file1.jpg
  5560  -   file3.png
  5561  -   file2.avi
  5562  
  5563  This would exclude
  5564  
  5565  -   secret17.jpg
-   every file which isn’t *.jpg, *.png or file2.avi
  5567  
  5568  A similar process is done on directory entries before recursing into
them. This only works on remotes which have a concept of directory (eg
local, google drive, onedrive, amazon drive) and not on bucket based
remotes (eg s3, swift, google cloud storage, b2).
  5572  
  5573  
  5574  Adding filtering rules
  5575  
  5576  Filtering rules are added with the following command line flags.
  5577  
  5578  Repeating options
  5579  
  5580  You can repeat the following options to add more than one rule of that
  5581  type.
  5582  
  5583  -   --include
  5584  -   --include-from
  5585  -   --exclude
  5586  -   --exclude-from
  5587  -   --filter
  5588  -   --filter-from
  5589  
  5590  IMPORTANT You should not use --include* together with --exclude*. It may
produce different results than you expect. In that case try using
--filter* instead.
  5593  
  5594  Note that all the options of the same type are processed together in the
  5595  order above, regardless of what order they were placed on the command
  5596  line.
  5597  
  5598  So all --include options are processed first in the order they appeared
  5599  on the command line, then all --include-from options etc.
  5600  
To mix up the order of includes and excludes, use the --filter flag.
  5603  
  5604  --exclude - Exclude files matching pattern
  5605  
  5606  Add a single exclude rule with --exclude.
  5607  
  5608  This flag can be repeated. See above for the order the flags are
  5609  processed in.
  5610  
  5611  Eg --exclude *.bak to exclude all bak files from the sync.
  5612  
  5613  --exclude-from - Read exclude patterns from file
  5614  
  5615  Add exclude rules from a file.
  5616  
  5617  This flag can be repeated. See above for the order the flags are
  5618  processed in.
  5619  
  5620  Prepare a file like this exclude-file.txt
  5621  
  5622      # a sample exclude rule file
  5623      *.bak
  5624      file2.jpg
  5625  
  5626  Then use as --exclude-from exclude-file.txt. This will sync all files
  5627  except those ending in bak and file2.jpg.
  5628  
  5629  This is useful if you have a lot of rules.
  5630  
  5631  --include - Include files matching pattern
  5632  
  5633  Add a single include rule with --include.
  5634  
  5635  This flag can be repeated. See above for the order the flags are
  5636  processed in.
  5637  
  5638  Eg --include *.{png,jpg} to include all png and jpg files in the backup
  5639  and no others.
  5640  
  5641  This adds an implicit --exclude * at the very end of the filter list.
  5642  This means you can mix --include and --include-from with the other
  5643  filters (eg --exclude) but you must include all the files you want in
  5644  the include statement. If this doesn’t provide enough flexibility then
  5645  you must use --filter-from.
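
A complete command using this rule might look like the following (the
paths are placeholders; note the quoting, covered under Quoting shell
metacharacters below). Remove --dry-run once the selection looks right:

    rclone copy --include "*.{png,jpg}" /path/to/src remote:backup --dry-run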
  5646  
  5647  --include-from - Read include patterns from file
  5648  
  5649  Add include rules from a file.
  5650  
  5651  This flag can be repeated. See above for the order the flags are
  5652  processed in.
  5653  
  5654  Prepare a file like this include-file.txt
  5655  
  5656      # a sample include rule file
  5657      *.jpg
  5658      *.png
  5659      file2.avi
  5660  
  5661  Then use as --include-from include-file.txt. This will sync all jpg, png
  5662  files and file2.avi.
  5663  
  5664  This is useful if you have a lot of rules.
  5665  
  5666  This adds an implicit --exclude * at the very end of the filter list.
  5667  This means you can mix --include and --include-from with the other
  5668  filters (eg --exclude) but you must include all the files you want in
  5669  the include statement. If this doesn’t provide enough flexibility then
  5670  you must use --filter-from.
  5671  
  5672  --filter - Add a file-filtering rule
  5673  
  5674  This can be used to add a single include or exclude rule. Include rules
  5675  start with + and exclude rules start with -. A special rule called ! can
  5676  be used to clear the existing rules.
  5677  
  5678  This flag can be repeated. See above for the order the flags are
  5679  processed in.
  5680  
  5681  Eg --filter "- *.bak" to exclude all bak files from the sync.
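
Because rules of the same type are applied in the order they appear on
the command line, several --filter flags can be combined in a single
command, eg (paths are placeholders):

    rclone copy --filter "- secret*.jpg" --filter "+ *.jpg" --filter "- *" /path/to/pics remote:pics --dry-run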
  5682  
  5683  --filter-from - Read filtering patterns from a file
  5684  
  5685  Add include/exclude rules from a file.
  5686  
  5687  This flag can be repeated. See above for the order the flags are
  5688  processed in.
  5689  
  5690  Prepare a file like this filter-file.txt
  5691  
  5692      # a sample filter rule file
  5693      - secret*.jpg
  5694      + *.jpg
  5695      + *.png
  5696      + file2.avi
  5697      - /dir/Trash/**
  5698      + /dir/**
  5699      # exclude everything else
  5700      - *
  5701  
  5702  Then use as --filter-from filter-file.txt. The rules are processed in
  5703  the order that they are defined.
  5704  
  5705  This example will include all jpg and png files, exclude any files
  5706  matching secret*.jpg and include file2.avi. It will also include
  5707  everything in the directory dir at the root of the sync, except
  5708  dir/Trash which it will exclude. Everything else will be excluded from
  5709  the sync.
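
A full command using the file above might look like this (paths are
placeholders); --dry-run and -v let you check what would be transferred
before running the sync for real:

    rclone sync --filter-from filter-file.txt --dry-run -v /path/to/src remote:backup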
  5710  
  5711  --files-from - Read list of source-file names
  5712  
  5713  This reads a list of file names from the file passed in and ONLY these
  5714  files are transferred. The FILTERING RULES ARE IGNORED completely if you
  5715  use this option.
  5716  
  5717  Rclone will traverse the file system if you use --files-from,
  5718  effectively using the files in --files-from as a set of filters. Rclone
  5719  will not error if any of the files are missing.
  5720  
  5721  If you use --no-traverse as well as --files-from then rclone will not
  5722  traverse the destination file system, it will find each file
  5723  individually using approximately 1 API call. This can be more efficient
  5724  for small lists of files.
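
For example, to copy a short list of files without scanning the whole
destination (paths are placeholders):

    rclone copy --no-traverse --files-from files-from.txt /home/me/pics remote:pics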
  5725  
  5726  This option can be repeated to read from more than one file. These are
  5727  read in the order that they are placed on the command line.
  5728  
  5729  Paths within the --files-from file will be interpreted as starting with
  5730  the root specified in the command. Leading / characters are ignored.
  5731  
  5732  For example, suppose you had files-from.txt with this content:
  5733  
  5734      # comment
  5735      file1.jpg
  5736      subdir/file2.jpg
  5737  
  5738  You could then use it like this:
  5739  
  5740      rclone copy --files-from files-from.txt /home/me/pics remote:pics
  5741  
  5742  This will transfer these files only (if they exist)
  5743  
  5744      /home/me/pics/file1.jpg        → remote:pics/file1.jpg
    /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
  5746  
  5747  To take a more complicated example, let’s say you had a few files you
  5748  want to back up regularly with these absolute paths:
  5749  
  5750      /home/user1/important
  5751      /home/user1/dir/file
  5752      /home/user2/stuff
  5753  
  5754  To copy these you’d find a common subdirectory - in this case /home and
  5755  put the remaining files in files-from.txt with or without leading /, eg
  5756  
  5757      user1/important
  5758      user1/dir/file
  5759      user2/stuff
  5760  
  5761  You could then copy these to a remote like this
  5762  
  5763      rclone copy --files-from files-from.txt /home remote:backup
  5764  
  5765  The 3 files will arrive in remote:backup with the paths as in the
  5766  files-from.txt like this:
  5767  
  5768      /home/user1/important → remote:backup/user1/important
  5769      /home/user1/dir/file  → remote:backup/user1/dir/file
    /home/user2/stuff     → remote:backup/user2/stuff
  5771  
  5772  You could of course choose / as the root too in which case your
  5773  files-from.txt might look like this.
  5774  
  5775      /home/user1/important
  5776      /home/user1/dir/file
  5777      /home/user2/stuff
  5778  
  5779  And you would transfer it like this
  5780  
  5781      rclone copy --files-from files-from.txt / remote:backup
  5782  
  5783  In this case there will be an extra home directory on the remote:
  5784  
    /home/user1/important → remote:backup/home/user1/important
    /home/user1/dir/file  → remote:backup/home/user1/dir/file
    /home/user2/stuff     → remote:backup/home/user2/stuff
  5788  
  5789  --min-size - Don’t transfer any file smaller than this
  5790  
This option controls the minimum size of files which will be
transferred. The size can be given as a plain number (interpreted as
kBytes) or with a suffix of k, M, or G.
  5793  
  5794  For example --min-size 50k means no files smaller than 50kByte will be
  5795  transferred.
  5796  
  5797  --max-size - Don’t transfer any file larger than this
  5798  
This option controls the maximum size of files which will be
transferred. The size can be given as a plain number (interpreted as
kBytes) or with a suffix of k, M, or G.
  5801  
  5802  For example --max-size 1G means no files larger than 1GByte will be
  5803  transferred.
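
The two flags can be combined to only transfer files within a size
range, eg (paths are placeholders):

    rclone copy --min-size 50k --max-size 1G /path/to/src remote:backup --dry-run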
  5804  
  5805  --max-age - Don’t transfer any file older than this
  5806  
  5807  This option controls the maximum age of files to transfer. Give in
  5808  seconds or with a suffix of:
  5809  
  5810  -   ms - Milliseconds
  5811  -   s - Seconds
  5812  -   m - Minutes
  5813  -   h - Hours
  5814  -   d - Days
  5815  -   w - Weeks
  5816  -   M - Months
  5817  -   y - Years
  5818  
  5819  For example --max-age 2d means no files older than 2 days will be
  5820  transferred.
  5821  
  5822  --min-age - Don’t transfer any file younger than this
  5823  
  5824  This option controls the minimum age of files to transfer. Give in
seconds or with a suffix (see --max-age for the list of suffixes).
  5826  
  5827  For example --min-age 2d means no files younger than 2 days will be
  5828  transferred.
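
Combining the two flags selects an age window. For example, to transfer
only files last modified between 7 and 30 days ago (paths are
placeholders):

    rclone copy --min-age 7d --max-age 30d /path/to/src remote:backup --dry-run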
  5829  
  5830  --delete-excluded - Delete files on dest excluded from sync
  5831  
  5832  IMPORTANT this flag is dangerous - use with --dry-run and -v first.
  5833  
  5834  When doing rclone sync this will delete any files which are excluded
  5835  from the sync on the destination.
  5836  
  5837  If for example you did a sync from A to B without the --min-size 50k
  5838  flag
  5839  
  5840      rclone sync A: B:
  5841  
Then you repeated it like this with the --delete-excluded flag
  5843  
  5844      rclone --min-size 50k --delete-excluded sync A: B:
  5845  
  5846  This would delete all files on B which are less than 50 kBytes as these
  5847  are now excluded from the sync.
  5848  
  5849  Always test first with --dry-run and -v before using this flag.
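
For example, a safe way to preview the deletions from the example above
is:

    rclone --min-size 50k --delete-excluded --dry-run -v sync A: B: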
  5850  
  5851  --dump filters - dump the filters to the output
  5852  
  5853  This dumps the defined filters to the output as regular expressions.
  5854  
  5855  Useful for debugging.
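
For example, to see the regular expressions rclone builds from a rule
before it lists the remote (remote:path is a placeholder):

    rclone --include "/a/*.jpg" --dump filters ls remote:path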
  5856  
  5857  --ignore-case - make searches case insensitive
  5858  
  5859  Normally filter patterns are case sensitive. If this flag is supplied
  5860  then filter patterns become case insensitive.
  5861  
  5862  Normally a --include "file.txt" will not match a file called FILE.txt.
However if you use the --ignore-case flag then --include "file.txt"
will match a file called FILE.txt.
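
For example (paths are placeholders):

    rclone copy --ignore-case --include "file.txt" /path/to/src remote:dst --dry-run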
  5865  
  5866  
  5867  Quoting shell metacharacters
  5868  
  5869  The examples above may not work verbatim in your shell as they have
  5870  shell metacharacters in them (eg *), and may require quoting.
  5871  
  5872  Eg linux, OSX
  5873  
  5874  -   --include \*.jpg
  5875  -   --include '*.jpg'
  5876  -   --include='*.jpg'
  5877  
  5878  In Windows the expansion is done by the command not the shell so this
  5879  should work fine
  5880  
  5881  -   --include *.jpg
  5882  
  5883  
  5884  Exclude directory based on a file
  5885  
It is possible to exclude a directory based on a file present in that
directory. The filename should be specified using the
--exclude-if-present flag. This flag has priority over the other
filtering flags.
  5890  
Imagine you have the following directory structure:
  5892  
  5893      dir1/file1
  5894      dir1/dir2/file2
  5895      dir1/dir2/dir3/file3
  5896      dir1/dir2/dir3/.ignore
  5897  
  5898  You can exclude dir3 from sync by running the following command:
  5899  
  5900      rclone sync --exclude-if-present .ignore dir1 remote:backup
  5901  
  5902  Currently only one filename is supported, i.e. --exclude-if-present
  5903  should not be used multiple times.
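
To check the effect before syncing, you can apply the same flag to a
listing command first:

    rclone lsl --exclude-if-present .ignore dir1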
  5904  
  5905  
  5906  
  5907  REMOTE CONTROLLING RCLONE
  5908  
  5909  
  5910  If rclone is run with the --rc flag then it starts an http server which
  5911  can be used to remote control rclone.
  5912  
  5913  If you just want to run a remote control then see the rcd command.
  5914  
  5915  NB this is experimental and everything here is subject to change!
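
For example, you might enable the remote control on a long running
command, or run a standalone server with the rcd command (the mount
point and credentials below are placeholders):

    rclone mount --rc remote:path /path/to/mountpoint
    rclone rcd --rc-user=me --rc-pass=secret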
  5916  
  5917  
  5918  Supported parameters
  5919  
--rc
  5921  
Flag to start the http server to listen for remote requests
  5923  
--rc-addr=IP
  5925  
  5926  IPaddress:Port or :Port to bind server to. (default “localhost:5572”)
  5927  
--rc-cert=KEY
  5929  
  5930  SSL PEM key (concatenation of certificate and CA certificate)
  5931  
--rc-client-ca=PATH
  5933  
  5934  Client certificate authority to verify clients with
  5935  
--rc-htpasswd=PATH
  5937  
  5938  htpasswd file - if not provided no authentication is done
  5939  
--rc-key=PATH
  5941  
  5942  SSL PEM Private key
  5943  
--rc-max-header-bytes=VALUE
  5945  
  5946  Maximum size of request header (default 4096)
  5947  
--rc-user=VALUE
  5949  
  5950  User name for authentication.
  5951  
--rc-pass=VALUE
  5953  
  5954  Password for authentication.
  5955  
--rc-realm=VALUE
  5957  
  5958  Realm for authentication (default “rclone”)
  5959  
--rc-server-read-timeout=DURATION
  5961  
  5962  Timeout for server reading data (default 1h0m0s)
  5963  
--rc-server-write-timeout=DURATION
  5965  
  5966  Timeout for server writing data (default 1h0m0s)
  5967  
--rc-serve
  5969  
  5970  Enable the serving of remote objects via the HTTP interface. This means
  5971  objects will be accessible at http://127.0.0.1:5572/ by default, so you
  5972  can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a
  5973  listing of the remotes. Objects may be requested from remotes using this
  5974  syntax http://127.0.0.1:5572/[remote:path]/path/to/object
  5975  
  5976  Default Off.
  5977  
--rc-files /path/to/directory
  5979  
  5980  Path to local files to serve on the HTTP server.
  5981  
  5982  If this is set then rclone will serve the files in that directory. It
  5983  will also open the root in the web browser if specified. This is for
  5984  implementing browser based GUIs for rclone functions.
  5985  
  5986  If --rc-user or --rc-pass is set then the URL that is opened will have
  5987  the authorization in the URL in the http://user:pass@localhost/ style.
  5988  
  5989  Default Off.
  5990  
--rc-job-expire-duration=DURATION
  5992  
  5993  Expire finished async jobs older than DURATION (default 60s).
  5994  
--rc-job-expire-interval=DURATION
  5996  
  5997  Interval duration to check for expired async jobs (default 10s).
  5998  
--rc-no-auth
  6000  
  6001  By default rclone will require authorisation to have been set up on the
  6002  rc interface in order to use any methods which access any rclone
remotes. Eg operations/list is denied as it involves creating a remote,
as is sync/copy.
  6005  
  6006  If this is set then no authorisation will be required on the server to
  6007  use these methods. The alternative is to use --rc-user and --rc-pass and
  6008  use these credentials in the request.
  6009  
  6010  Default Off.
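
If authentication is configured with --rc-user and --rc-pass (or an
htpasswd file) then the same credentials must be supplied with each
request using HTTP basic authentication, eg with curl (the credentials
below are placeholders):

    curl -u me:secret -X POST 'http://localhost:5572/rc/noop?potato=1'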
  6011  
  6012  
  6013  Accessing the remote control via the rclone rc command
  6014  
  6015  Rclone itself implements the remote control protocol in its rclone rc
  6016  command.
  6017  
  6018  You can use it like this
  6019  
  6020      $ rclone rc rc/noop param1=one param2=two
  6021      {
  6022          "param1": "one",
  6023          "param2": "two"
  6024      }
  6025  
  6026  Run rclone rc on its own to see the help for the installed remote
  6027  control commands.
  6028  
  6029  rclone rc also supports a --json flag which can be used to send more
  6030  complicated input parameters.
  6031  
  6032      $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
  6033      {
  6034          "p1": [
  6035              1,
  6036              "2",
  6037              null,
  6038              4
  6039          ],
  6040          "p2": {
  6041              "a": 1,
  6042              "b": 2
  6043          }
  6044      }
  6045  
  6046  
  6047  Special parameters
  6048  
  6049  The rc interface supports some special parameters which apply to ALL
  6050  commands. These start with _ to show they are different.
  6051  
  6052  Running asynchronous jobs with _async = true
  6053  
  6054  If _async has a true value when supplied to an rc call then it will
  6055  return immediately with a job id and the task will be run in the
  6056  background. The job/status call can be used to get information of the
  6057  background job. The job can be queried for up to 1 minute after it has
  6058  finished.
  6059  
  6060  It is recommended that potentially long running jobs, eg sync/sync,
  6061  sync/copy, sync/move, operations/purge are run with the _async flag to
  6062  avoid any potential problems with the HTTP request and response timing
  6063  out.
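
For example, to start a copy in the background using the parameters
described under sync/copy below (remote names are placeholders):

    rclone rc --json '{ "srcFs": "drive:src", "dstFs": "drive:dst", "_async": true }' sync/copy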
  6064  
  6065  Starting a job with the _async flag:
  6066  
  6067      $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
  6068      {
  6069          "jobid": 2
  6070      }
  6071  
  6072  Query the status to see if the job has finished. For more information on
  6073  the meaning of these return parameters see the job/status call.
  6074  
  6075      $ rclone rc --json '{ "jobid":2 }' job/status
  6076      {
  6077          "duration": 0.000124163,
  6078          "endTime": "2018-10-27T11:38:07.911245881+01:00",
  6079          "error": "",
  6080          "finished": true,
  6081          "id": 2,
  6082          "output": {
  6083              "_async": true,
  6084              "p1": [
  6085                  1,
  6086                  "2",
  6087                  null,
  6088                  4
  6089              ],
  6090              "p2": {
  6091                  "a": 1,
  6092                  "b": 2
  6093              }
  6094          },
  6095          "startTime": "2018-10-27T11:38:07.911121728+01:00",
  6096          "success": true
  6097      }
  6098  
  6099  job/list can be used to show the running or recently completed jobs
  6100  
  6101      $ rclone rc job/list
  6102      {
  6103          "jobids": [
  6104              2
  6105          ]
  6106      }
  6107  
  6108  
  6109  Supported commands
  6110  
  6111  cache/expire: Purge a remote from cache
  6112  
Purge a remote from the cache backend. Supports either a directory or a
file.

Params:

-   remote = path to remote (required)
-   withData = true/false to delete cached data (chunks) as well
    (optional)
  6116  
  6117  Eg
  6118  
  6119      rclone rc cache/expire remote=path/to/sub/folder/
  6120      rclone rc cache/expire remote=/ withData=true
  6121  
  6122  cache/fetch: Fetch file chunks
  6123  
  6124  Ensure the specified file chunks are cached on disk.
  6125  
  6126  The chunks= parameter specifies the file chunks to check. It takes a
  6127  comma separated list of array slice indices. The slice indices are
  6128  similar to Python slices: start[:end]
  6129  
start is the 0 based chunk number from the beginning of the file to
fetch inclusive. end is the 0 based chunk number from the beginning of
the file to fetch exclusive. Both values can be negative, in which case
they count from the back of the file. The value “-5:” represents the
last 5 chunks of a file.

Some valid examples are:

-   “:5,-5:” -> the first and last five chunks
-   “0,-2” -> the first and the second last chunk
-   “0:10” -> the first ten chunks
  6139  
  6140  Any parameter with a key that starts with “file” can be used to specify
  6141  files to fetch, eg
  6142  
  6143      rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
  6144  
File names will automatically be encrypted when a crypt remote is used
on top of the cache.
  6147  
  6148  cache/stats: Get cache stats
  6149  
  6150  Show statistics for the cache remote.
  6151  
  6152  config/create: create the config for a remote.
  6153  
  6154  This takes the following parameters
  6155  
  6156  -   name - name of remote
  6157  -   type - type of the new remote
  6158  
See the config create command for more information on the above.
  6160  
  6161  Authentication is required for this call.
  6162  
  6163  config/delete: Delete a remote in the config file.
  6164  
  6165  Parameters: - name - name of remote to delete
  6166  
See the config delete command for more information on the above.
  6168  
  6169  Authentication is required for this call.
  6170  
  6171  config/dump: Dumps the config file.
  6172  
  6173  Returns a JSON object: - key: value
  6174  
  6175  Where keys are remote names and values are the config parameters.
  6176  
See the config dump command for more information on the above.
  6178  
  6179  Authentication is required for this call.
  6180  
  6181  config/get: Get a remote in the config file.
  6182  
  6183  Parameters: - name - name of remote to get
  6184  
See the config dump command for more information on the above.
  6186  
  6187  Authentication is required for this call.
  6188  
  6189  config/listremotes: Lists the remotes in the config file.
  6190  
  6191  Returns - remotes - array of remote names
  6192  
See the listremotes command for more information on the above.
  6194  
  6195  Authentication is required for this call.
  6196  
  6197  config/password: password the config for a remote.
  6198  
  6199  This takes the following parameters
  6200  
  6201  -   name - name of remote
  6202  
See the config password command for more information on the above.
  6205  
  6206  Authentication is required for this call.
  6207  
  6208  config/providers: Shows how providers are configured in the config file.
  6209  
  6210  Returns a JSON object: - providers - array of objects
  6211  
See the config providers command for more information on the above.
  6214  
  6215  Authentication is required for this call.
  6216  
  6217  config/update: update the config for a remote.
  6218  
  6219  This takes the following parameters
  6220  
  6221  -   name - name of remote
  6222  
See the config update command for more information on the above.
  6224  
  6225  Authentication is required for this call.
  6226  
  6227  core/bwlimit: Set the bandwidth limit.
  6228  
  6229  This sets the bandwidth limit to that passed in.
  6230  
  6231  Eg
  6232  
  6233      rclone rc core/bwlimit rate=1M
  6234      rclone rc core/bwlimit rate=off
  6235  
The format of the parameter is exactly the same as passed to --bwlimit
except only one bandwidth may be specified.
  6238  
  6239  core/gc: Runs a garbage collection.
  6240  
  6241  This tells the go runtime to do a garbage collection run. It isn’t
  6242  necessary to call this normally, but it can be useful for debugging
  6243  memory problems.
  6244  
  6245  core/memstats: Returns the memory statistics
  6246  
  6247  This returns the memory statistics of the running program. What the
  6248  values mean are explained in the go docs:
  6249  https://golang.org/pkg/runtime/#MemStats
  6250  
  6251  The most interesting values for most people are:
  6252  
  6253  -   HeapAlloc: This is the amount of memory rclone is actually using
  6254  -   HeapSys: This is the amount of memory rclone has obtained from the
  6255      OS
  6256  -   Sys: this is the total amount of memory requested from the OS
  6257      -   It is virtual memory so may include unused memory
  6258  
  6259  core/obscure: Obscures a string passed in.
  6260  
Pass a clear string and rclone will obscure it for the config file:

-   clear - string

Returns

-   obscured - string
  6265  
  6266  core/pid: Return PID of current process
  6267  
This returns the PID of the current process. Useful for stopping the
rclone process.
  6269  
  6270  core/stats: Returns stats about current transfers.
  6271  
  6272  This returns all available stats
  6273  
  6274      rclone rc core/stats
  6275  
  6276  Returns the following values:
  6277  
  6278      {
  6279          "speed": average speed in bytes/sec since start of the process,
  6280          "bytes": total transferred bytes since the start of the process,
  6281          "errors": number of errors,
  6282          "fatalError": whether there has been at least one FatalError,
  6283          "retryError": whether there has been at least one non-NoRetryError,
  6284          "checks": number of checked files,
  6285          "transfers": number of transferred files,
  6286          "deletes" : number of deleted files,
  6287          "elapsedTime": time in seconds since the start of the process,
  6288          "lastError": last occurred error,
  6289          "transferring": an array of currently active file transfers:
  6290              [
  6291                  {
  6292                      "bytes": total transferred bytes for this file,
  6293                      "eta": estimated time in seconds until file transfer completion
  6294                      "name": name of the file,
  6295                      "percentage": progress of the file transfer in percent,
  6296                      "speed": speed in bytes/sec,
  6297                      "speedAvg": speed in bytes/sec as an exponentially weighted moving average,
  6298                      "size": size of the file in bytes
  6299                  }
  6300              ],
  6301          "checking": an array of names of currently active file checks
  6302              []
  6303      }
  6304  
  6305  Values for “transferring”, “checking” and “lastError” are only assigned
  6306  if data is available. The value for “eta” is null if an eta cannot be
  6307  determined.
  6308  
  6309  core/version: Shows the current version of rclone and the go runtime.
  6310  
This shows the current version of rclone and the go runtime:

-   version - rclone version, eg “v1.44”
-   decomposed - version number as [major, minor, patch, subpatch] -
    note patch and subpatch will be 999 for a git compiled version
-   isGit - boolean - true if this was compiled from the git version
-   os - OS in use as according to Go
-   arch - cpu architecture in use according to Go
-   goVersion - version of Go runtime in use
  6318  
  6319  job/list: Lists the IDs of the running jobs
  6320  
  6321  Parameters - None
  6322  
  6323  Results - jobids - array of integer job ids
  6324  
  6325  job/status: Reads the status of the job ID
  6326  
  6327  Parameters - jobid - id of the job (integer)
  6328  
Results

-   finished - boolean - whether the job has finished or not
-   duration - time in seconds that the job ran for
-   endTime - time the job finished (eg
    “2018-10-26T18:50:20.528746884+01:00”)
-   error - error from the job or empty string for no error
-   id - as passed in above
-   startTime - time the job started (eg
    “2018-10-26T18:50:20.528336039+01:00”)
-   success - boolean - true for success false otherwise
-   output - output of the job as would have been returned if called
    synchronously
  6337  
  6338  operations/about: Return the space used on the remote
  6339  
  6340  This takes the following parameters
  6341  
  6342  -   fs - a remote name string eg “drive:”
  6343  
The result is as returned from rclone about --json
  6345  
See the about command for more information on the above.
  6347  
  6348  Authentication is required for this call.
  6349  
  6350  operations/cleanup: Remove trashed files in the remote or path
  6351  
  6352  This takes the following parameters
  6353  
  6354  -   fs - a remote name string eg “drive:”
  6355  
See the cleanup command for more information on the above.
  6357  
  6358  Authentication is required for this call.
  6359  
  6360  operations/copyfile: Copy a file from source remote to destination remote
  6361  
  6362  This takes the following parameters
  6363  
  6364  -   srcFs - a remote name string eg “drive:” for the source
  6365  -   srcRemote - a path within that remote eg “file.txt” for the source
  6366  -   dstFs - a remote name string eg “drive2:” for the destination
  6367  -   dstRemote - a path within that remote eg “file2.txt” for the
  6368      destination
  6369  
  6370  Authentication is required for this call.
  6371  
  6372  operations/copyurl: Copy the URL to the object
  6373  
  6374  This takes the following parameters
  6375  
  6376  -   fs - a remote name string eg “drive:”
  6377  -   remote - a path within that remote eg “dir”
  6378  -   url - string, URL to read from
  6379  
See the copyurl command for more information on the above.
  6381  
  6382  Authentication is required for this call.
  6383  
  6384  operations/delete: Remove files in the path
  6385  
  6386  This takes the following parameters
  6387  
  6388  -   fs - a remote name string eg “drive:”
  6389  
See the delete command for more information on the above.
  6391  
  6392  Authentication is required for this call.
  6393  
  6394  operations/deletefile: Remove the single file pointed to
  6395  
  6396  This takes the following parameters
  6397  
  6398  -   fs - a remote name string eg “drive:”
  6399  -   remote - a path within that remote eg “dir”
  6400  
See the deletefile command for more information on the above.
  6402  
  6403  Authentication is required for this call.
  6404  
  6405  operations/fsinfo: Return information about the remote
  6406  
  6407  This takes the following parameters
  6408  
  6409  -   fs - a remote name string eg “drive:”
  6410  
This returns info about the remote passed in:
  6412  
  6413      {
  6414          // optional features and whether they are available or not
  6415          "Features": {
  6416              "About": true,
  6417              "BucketBased": false,
  6418              "CanHaveEmptyDirectories": true,
  6419              "CaseInsensitive": false,
  6420              "ChangeNotify": false,
  6421              "CleanUp": false,
  6422              "Copy": false,
  6423              "DirCacheFlush": false,
  6424              "DirMove": true,
  6425              "DuplicateFiles": false,
  6426              "GetTier": false,
  6427              "ListR": false,
  6428              "MergeDirs": false,
  6429              "Move": true,
  6430              "OpenWriterAt": true,
  6431              "PublicLink": false,
  6432              "Purge": true,
  6433              "PutStream": true,
  6434              "PutUnchecked": false,
  6435              "ReadMimeType": false,
  6436              "ServerSideAcrossConfigs": false,
  6437              "SetTier": false,
  6438              "SetWrapper": false,
  6439              "UnWrap": false,
  6440              "WrapFs": false,
  6441              "WriteMimeType": false
  6442          },
  6443          // Names of hashes available
  6444          "Hashes": [
  6445              "MD5",
  6446              "SHA-1",
  6447              "DropboxHash",
  6448              "QuickXorHash"
  6449          ],
  6450          "Name": "local",    // Name as created
  6451          "Precision": 1,     // Precision of timestamps in ns
  6452          "Root": "/",        // Path as created
  6453          "String": "Local file system at /" // how the remote will appear in logs
  6454      }
  6455  
  6456  This command does not have a command line equivalent so use this
  6457  instead:
  6458  
  6459      rclone rc --loopback operations/fsinfo fs=remote:
  6460  
  6461  operations/list: List the given remote and path in JSON format
  6462  
  6463  This takes the following parameters
  6464  
  6465  -   fs - a remote name string eg “drive:”
  6466  -   remote - a path within that remote eg “dir”
  6467  -   opt - a dictionary of options to control the listing (optional)
  6468      -   recurse - If set recurse directories
    -   noModTime - If set don’t read the modification time
    -   showEncrypted - If set show the encrypted names
  6471      -   showOrigIDs - If set show the IDs for each item if known
  6472      -   showHash - If set return a dictionary of hashes
  6473  
  6474  The result is
  6475  
  6476  -   list
  6477      -   This is an array of objects as described in the lsjson command
  6478  
  6479  See the lsjson command for more information on the above and examples.
  6480  
  6481  Authentication is required for this call.
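
For example, a recursive listing might be requested like this (the
remote name and path are placeholders):

    rclone rc --json '{ "fs": "drive:", "remote": "dir", "opt": { "recurse": true } }' operations/list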
  6482  
  6483  operations/mkdir: Make a destination directory or container
  6484  
  6485  This takes the following parameters
  6486  
  6487  -   fs - a remote name string eg “drive:”
  6488  -   remote - a path within that remote eg “dir”
  6489  
See the mkdir command for more information on the above.
  6491  
  6492  Authentication is required for this call.
  6493  
  6494  operations/movefile: Move a file from source remote to destination remote
  6495  
  6496  This takes the following parameters
  6497  
  6498  -   srcFs - a remote name string eg “drive:” for the source
  6499  -   srcRemote - a path within that remote eg “file.txt” for the source
  6500  -   dstFs - a remote name string eg “drive2:” for the destination
  6501  -   dstRemote - a path within that remote eg “file2.txt” for the
  6502      destination
  6503  
  6504  Authentication is required for this call.
  6505  
  6506  operations/publiclink: Create or retrieve a public link to the given file or folder.
  6507  
  6508  This takes the following parameters
  6509  
  6510  -   fs - a remote name string eg “drive:”
  6511  -   remote - a path within that remote eg “dir”
  6512  
  6513  Returns
  6514  
  6515  -   url - URL of the resource
  6516  
See the link command for more information on the above.
  6518  
  6519  Authentication is required for this call.
  6520  
  6521  operations/purge: Remove a directory or container and all of its contents
  6522  
  6523  This takes the following parameters
  6524  
  6525  -   fs - a remote name string eg “drive:”
  6526  -   remote - a path within that remote eg “dir”
  6527  
See the purge command for more information on the above.
  6529  
  6530  Authentication is required for this call.
  6531  
  6532  operations/rmdir: Remove an empty directory or container
  6533  
  6534  This takes the following parameters
  6535  
  6536  -   fs - a remote name string eg “drive:”
  6537  -   remote - a path within that remote eg “dir”
  6538  
See the rmdir command for more information on the above.
  6540  
  6541  Authentication is required for this call.
  6542  
  6543  operations/rmdirs: Remove all the empty directories in the path
  6544  
  6545  This takes the following parameters
  6546  
  6547  -   fs - a remote name string eg “drive:”
  6548  -   remote - a path within that remote eg “dir”
  6549  -   leaveRoot - boolean, set to true not to delete the root
  6550  
See the rmdirs command for more information on the above.
  6552  
  6553  Authentication is required for this call.
  6554  
  6555  operations/size: Count the number of bytes and files in remote
  6556  
  6557  This takes the following parameters
  6558  
  6559  -   fs - a remote name string eg “drive:path/to/dir”
  6560  
  6561  Returns
  6562  
  6563  -   count - number of files
  6564  -   bytes - number of bytes in those files
  6565  
See the size command for more information on the above.
  6567  
  6568  Authentication is required for this call.
  6569  
  6570  options/blocks: List all the option blocks
  6571  
  6572  Returns - options - a list of the options block names
  6573  
  6574  options/get: Get all the options
  6575  
  6576  Returns an object where keys are option block names and values are an
  6577  object with the current option values in.
  6578  
This shows the internal names of the options within rclone, which
should map to the external options fairly easily with a few exceptions.
  6581  
  6582  options/set: Set an option
  6583  
  6584  Parameters
  6585  
  6586  -   option block name containing an object with
  6587      -   key: value
  6588  
  6589  Repeated as often as required.
  6590  
  6591  Only supply the options you wish to change. If an option is unknown it
  6592  will be silently ignored. Not all options will have an effect when
  6593  changed like this.
  6594  
  6595  For example:
  6596  
  6597  This sets DEBUG level logs (-vv)
  6598  
  6599      rclone rc options/set --json '{"main": {"LogLevel": 8}}'
  6600  
  6601  And this sets INFO level logs (-v)
  6602  
  6603      rclone rc options/set --json '{"main": {"LogLevel": 7}}'
  6604  
  6605  And this sets NOTICE level logs (normal without -v)
  6606  
  6607      rclone rc options/set --json '{"main": {"LogLevel": 6}}'
  6608  
  6609  rc/error: This returns an error
  6610  
  6611  This returns an error with the input as part of its error string. Useful
  6612  for testing error handling.
  6613  
  6614  rc/list: List all the registered remote control commands
  6615  
  6616  This lists all the registered remote control commands as a JSON map in
  6617  the commands response.
  6618  
  6619  rc/noop: Echo the input to the output parameters
  6620  
  6621  This echoes the input parameters to the output parameters for testing
  6622  purposes. It can be used to check that rclone is still alive and to
  6623  check that parameter passing is working properly.
  6624  
  6625  rc/noopauth: Echo the input to the output parameters requiring auth
  6626  
  6627  This echoes the input parameters to the output parameters for testing
  6628  purposes. It can be used to check that rclone is still alive and to
  6629  check that parameter passing is working properly.
  6630  
  6631  Authentication is required for this call.
  6632  
  6633  sync/copy: copy a directory from source remote to destination remote
  6634  
  6635  This takes the following parameters
  6636  
  6637  -   srcFs - a remote name string eg “drive:src” for the source
  6638  -   dstFs - a remote name string eg “drive:dst” for the destination
  6639  
See the copy command for more information on the above.
  6641  
  6642  Authentication is required for this call.
  6643  
  6644  sync/move: move a directory from source remote to destination remote
  6645  
  6646  This takes the following parameters
  6647  
  6648  -   srcFs - a remote name string eg “drive:src” for the source
  6649  -   dstFs - a remote name string eg “drive:dst” for the destination
  6650  -   deleteEmptySrcDirs - delete empty src directories if set
  6651  
See the move command for more information on the above.
  6653  
  6654  Authentication is required for this call.
  6655  
  6656  sync/sync: sync a directory from source remote to destination remote
  6657  
  6658  This takes the following parameters
  6659  
  6660  -   srcFs - a remote name string eg “drive:src” for the source
  6661  -   dstFs - a remote name string eg “drive:dst” for the destination
  6662  
See the sync command for more information on the above.
  6664  
  6665  Authentication is required for this call.
  6666  
  6667  vfs/forget: Forget files or directories in the directory cache.
  6668  
  6669  This forgets the paths in the directory cache causing them to be re-read
  6670  from the remote when needed.
  6671  
  6672  If no paths are passed in then it will forget all the paths in the
  6673  directory cache.
  6674  
  6675      rclone rc vfs/forget
  6676  
  6677  Otherwise pass files or dirs in as file=path or dir=path. Any parameter
  6678  key starting with file will forget that file and any starting with dir
  6679  will forget that dir, eg
  6680  
  6681      rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
  6682  
  6683  vfs/poll-interval: Get the status or update the value of the poll-interval option.
  6684  
  6685  Without any parameter given this returns the current status of the
  6686  poll-interval setting.
  6687  
  6688  When the interval=duration parameter is set, the poll-interval value is
  6689  updated and the polling function is notified. Setting interval=0
  6690  disables poll-interval.
  6691  
  6692      rclone rc vfs/poll-interval interval=5m
  6693  
  6694  The timeout=duration parameter can be used to specify a time to wait for
the current poll function to apply the new value. If timeout is less
than or equal to 0, which is the default, rclone waits indefinitely.
  6697  
  6698  The new poll-interval value will only be active when the timeout is not
  6699  reached.
  6700  
  6701  If poll-interval is updated or disabled temporarily, some changes might
  6702  not get picked up by the polling function, depending on the used remote.
  6703  
  6704  vfs/refresh: Refresh the directory cache.
  6705  
  6706  This reads the directories for the specified paths and freshens the
  6707  directory cache.
  6708  
  6709  If no paths are passed in then it will refresh the root directory.
  6710  
  6711      rclone rc vfs/refresh
  6712  
  6713  Otherwise pass directories in as dir=path. Any parameter key starting
  6714  with dir will refresh that directory, eg
  6715  
  6716      rclone rc vfs/refresh dir=home/junk dir2=data/misc
  6717  
  6718  If the parameter recursive=true is given the whole directory tree will
get refreshed. This refresh will use --fast-list if enabled.
  6720  
  6721  
  6722  Accessing the remote control via HTTP
  6723  
  6724  Rclone implements a simple HTTP based protocol.
  6725  
Each endpoint takes a JSON object and returns a JSON object or an
  6727  error. The JSON objects are essentially a map of string names to values.
  6728  
All calls must be made using POST.
  6730  
  6731  The input objects can be supplied using URL parameters, POST parameters
  6732  or by supplying “Content-Type: application/json” and a JSON blob in the
  6733  body. There are examples of these below using curl.
  6734  
  6735  The response will be a JSON blob in the body of the response. This is
  6736  formatted to be reasonably human readable.
  6737  
  6738  Error returns
  6739  
  6740  If an error occurs then there will be an HTTP error status (eg 500) and
  6741  the body of the response will contain a JSON encoded error object, eg
  6742  
  6743      {
  6744          "error": "Expecting string value for key \"remote\" (was float64)",
  6745          "input": {
  6746              "fs": "/tmp",
  6747              "remote": 3
  6748          },
  6749          "status": 400
  6750          "path": "operations/rmdir",
  6751      }
  6752  
The keys in the error response are:

-   error - error string
-   input - the input parameters to the call
-   status - the HTTP status code
-   path - the path of the call
  6756  
  6757  CORS
  6758  
The server implements basic CORS support and allows all origins.
  6760  The response to a preflight OPTIONS request will echo the requested
  6761  “Access-Control-Request-Headers” back.
  6762  
  6763  Using POST with URL parameters only
  6764  
  6765      curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
  6766  
  6767  Response
  6768  
  6769      {
  6770          "potato": "1",
  6771          "sausage": "2"
  6772      }
  6773  
  6774  Here is what an error response looks like:
  6775  
  6776      curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
  6777  
  6778      {
  6779          "error": "arbitrary error on input map[potato:1 sausage:2]",
  6780          "input": {
  6781              "potato": "1",
  6782              "sausage": "2"
  6783          }
  6784      }
  6785  
  6786  Note that curl doesn’t return errors to the shell unless you use the -f
  6787  option
  6788  
  6789      $ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
  6790      curl: (22) The requested URL returned error: 400 Bad Request
  6791      $ echo $?
  6792      22
  6793  
  6794  Using POST with a form
  6795  
  6796      curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop
  6797  
  6798  Response
  6799  
  6800      {
  6801          "potato": "1",
  6802          "sausage": "2"
  6803      }
  6804  
  6805  Note that you can combine these with URL parameters too with the POST
  6806  parameters taking precedence.
  6807  
  6808      curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"
  6809  
  6810  Response
  6811  
  6812      {
  6813          "potato": "1",
  6814          "rutabaga": "3",
  6815          "sausage": "4"
  6816      }
  6817  
  6818  Using POST with a JSON blob
  6819  
  6820      curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop
  6821  
Response

    {
        "potato": 2,
        "sausage": 1
    }
  6828  
  6829  This can be combined with URL parameters too if required. The JSON blob
  6830  takes precedence.
  6831  
  6832      curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
  6833  
  6834      {
  6835          "potato": 2,
  6836          "rutabaga": "3",
  6837          "sausage": 1
  6838      }
  6839  
  6840  
  6841  Debugging rclone with pprof
  6842  
  6843  If you use the --rc flag this will also enable the use of the go
  6844  profiling tools on the same port.
  6845  
  6846  To use these, first install go.
  6847  
  6848  Debugging memory use
  6849  
  6850  To profile rclone’s memory use you can run:
  6851  
  6852      go tool pprof -web http://localhost:5572/debug/pprof/heap
  6853  
  6854  This should open a page in your browser showing what is using what
  6855  memory.
  6856  
  6857  You can also use the -text flag to produce a textual summary
  6858  
  6859      $ go tool pprof -text http://localhost:5572/debug/pprof/heap
  6860      Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
  6861            flat  flat%   sum%        cum   cum%
  6862       1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
  6863           513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
  6864               0     0%   100%  1024.03kB 66.62%  github.com/ncw/rclone/cmd/all.init
  6865               0     0%   100%  1024.03kB 66.62%  github.com/ncw/rclone/cmd/serve.init
  6866               0     0%   100%  1024.03kB 66.62%  github.com/ncw/rclone/cmd/serve/restic.init
  6867               0     0%   100%  1024.03kB 66.62%  github.com/ncw/rclone/vendor/golang.org/x/net/http2.init
  6868               0     0%   100%  1024.03kB 66.62%  github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.init
  6869               0     0%   100%  1024.03kB 66.62%  github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.init.0
  6870               0     0%   100%  1024.03kB 66.62%  main.init
  6871               0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
  6872               0     0%   100%      513kB 33.38%  net/http.(*conn).serve
  6873               0     0%   100%  1024.03kB 66.62%  runtime.main
  6874  
  6875  Debugging go routine leaks
  6876  
  6877  Memory leaks are most often caused by go routine leaks keeping memory
  6878  alive which should have been garbage collected.
  6879  
  6880  See all active go routines using
  6881  
  6882      curl http://localhost:5572/debug/pprof/goroutine?debug=1
  6883  
  6884  Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your
  6885  browser.
  6886  
  6887  Other profiles to look at
  6888  
  6889  You can see a summary of profiles available at
  6890  http://localhost:5572/debug/pprof/
  6891  
  6892  Here is how to use some of them:
  6893  
  6894  -   Memory: go tool pprof http://localhost:5572/debug/pprof/heap
  6895  -   Go routines:
  6896      curl http://localhost:5572/debug/pprof/goroutine?debug=1
  6897  -   30-second CPU profile:
  6898      go tool pprof http://localhost:5572/debug/pprof/profile
  6899  -   5-second execution trace:
  6900      wget http://localhost:5572/debug/pprof/trace?seconds=5
  6901  
  6902  See the net/http/pprof docs for more info on how to use the profiling
  6903  and for a general overview see the Go team’s blog post on profiling go
  6904  programs.
  6905  
  6906  The profiling hook is zero overhead unless it is used.
  6907  
  6908  
  6909  
  6910  OVERVIEW OF CLOUD STORAGE SYSTEMS
  6911  
  6912  
  6913  Each cloud storage system is slightly different. Rclone attempts to
  6914  provide a unified interface to them, but some underlying differences
  6915  show through.
  6916  
  6917  
  6918  Features
  6919  
  6920  Here is an overview of the major features of each cloud storage system.
  6921  
  6922    Name                                Hash       ModTime   Case Insensitive   Duplicate Files   MIME Type
  6923    ------------------------------ -------------- --------- ------------------ ----------------- -----------
  6924    Amazon Drive                        MD5          No            Yes                No              R
  6925    Amazon S3                           MD5          Yes            No                No             R/W
  6926    Backblaze B2                        SHA1         Yes            No                No             R/W
  6927    Box                                 SHA1         Yes           Yes                No              -
  6928    Dropbox                           DBHASH †       Yes           Yes                No              -
  6929    FTP                                  -           No             No                No              -
  6930    Google Cloud Storage                MD5          Yes            No                No             R/W
  6931    Google Drive                        MD5          Yes            No                Yes            R/W
  6932    HTTP                                 -           No             No                No              R
  6933    Hubic                               MD5          Yes            No                No             R/W
  6934    Jottacloud                          MD5          Yes           Yes                No             R/W
  6935    Koofr                               MD5          No            Yes                No              -
  6936    Mega                                 -           No             No                Yes             -
  6937    Microsoft Azure Blob Storage        MD5          Yes            No                No             R/W
  6938    Microsoft OneDrive                SHA1 ‡‡        Yes           Yes                No              R
  6939    OpenDrive                           MD5          Yes           Yes                No              -
  6940    Openstack Swift                     MD5          Yes            No                No             R/W
  6941    pCloud                           MD5, SHA1       Yes            No                No              W
  6942    QingStor                            MD5          No             No                No             R/W
  6943    SFTP                            MD5, SHA1 ‡      Yes         Depends              No              -
  6944    WebDAV                          MD5, SHA1 ††   Yes †††       Depends              No              -
  6945    Yandex Disk                         MD5          Yes            No                No             R/W
  6946    The local filesystem                All          Yes         Depends              No              -
  6947  
  6948  Hash
  6949  
  6950  The cloud storage system supports various hash types of the objects. The
  6951  hashes are used when transferring data as an integrity check and can be
  6952  specifically used with the --checksum flag in syncs and in the check
  6953  command.
  6954  
To verify checksums when transferring between cloud storage systems
they must support a common hash type.
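
For example, rclone check compares hashes wherever a common hash type is
available, and sync can be told to compare checksums instead of size and
modification time (the paths and remote names are placeholders):

    rclone check /path/to/local remote:path
    rclone sync --checksum /path/to/local remote:path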
  6957  
  6958  † Note that Dropbox supports its own custom hash. This is an SHA256 sum
  6959  of all the 4MB block SHA256s.
  6960  
  6961  ‡ SFTP supports checksums if the same login has shell access and md5sum
  6962  or sha1sum as well as echo are in the remote’s PATH.
  6963  
  6964  †† WebDAV supports hashes when used with Owncloud and Nextcloud only.
  6965  
  6966  ††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
  6967  
  6968  ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
  6969  for business and SharePoint server support Microsoft’s own QuickXorHash.
  6970  
  6971  ModTime
  6972  
  6973  The cloud storage system supports setting modification times on objects.
If it does then this enables using the modification times as part of
  6975  the sync. If not then only the size will be checked by default, though
  6976  the MD5SUM can be checked with the --checksum flag.
  6977  
  6978  All cloud storage systems support some kind of date on the object and
  6979  these will be set when transferring from the cloud storage system.
  6980  
  6981  Case Insensitive
  6982  
If a cloud storage system is case sensitive then it is possible to have
  6984  two files which differ only in case, eg file.txt and FILE.txt. If a
  6985  cloud storage system is case insensitive then that isn’t possible.
  6986  
  6987  This can cause problems when syncing between a case insensitive system
  6988  and a case sensitive system. The symptom of this is that no matter how
  6989  many times you run the sync it never completes fully.
  6990  
  6991  The local filesystem and SFTP may or may not be case sensitive depending
  6992  on OS.
  6993  
  6994  -   Windows - usually case insensitive, though case is preserved
  6995  -   OSX - usually case insensitive, though it is possible to format case
  6996      sensitive
  6997  -   Linux - usually case sensitive, but there are case insensitive file
  6998      systems (eg FAT formatted USB keys)
  6999  
  7000  Most of the time this doesn’t cause any problems as people tend to avoid
  7001  files whose name differs only by case even on case sensitive systems.
  7002  
  7003  Duplicate files
  7004  
  7005  If a cloud storage system allows duplicate files then it can have two
  7006  objects with the same name.
  7007  
  7008  This confuses rclone greatly when syncing - use the rclone dedupe
  7009  command to rename or remove duplicates.
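
For example, to deal with duplicates interactively, or to rename them
automatically (remote:path is a placeholder):

    rclone dedupe remote:path
    rclone dedupe rename remote:path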
  7010  
  7011  MIME Type
  7012  
  7013  MIME types (also known as media types) classify types of documents using
  7014  a simple text classification, eg text/html or application/pdf.
  7015  
  7016  Some cloud storage systems support reading (R) the MIME type of objects
  7017  and some support writing (W) the MIME type of objects.
  7018  
  7019  The MIME type can be important if you are serving files directly to HTTP
  7020  from the storage system.
  7021  
  7022  If you are copying from a remote which supports reading (R) to a remote
  7023  which supports writing (W) then rclone will preserve the MIME types.
  7024  Otherwise they will be guessed from the extension, or the remote itself
  7025  may assign the MIME type.
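
One way to see the MIME type rclone reads for objects is the lsjson
command, which includes a MimeType field in its output (remote:path is
a placeholder):

    rclone lsjson remote:path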
  7026  
  7027  
  7028  Optional Features
  7029  
All the remotes support a basic set of features, but some remotes
support optional features which make certain operations more efficient.
  7033  
  7034    Name                            Purge   Copy   Move   DirMove   CleanUp   ListR   StreamUpload   LinkSharing   About
  7035    ------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- -------
  7036    Amazon Drive                     Yes     No    Yes      Yes     No #575    No          No         No #2178      No
  7037    Amazon S3                        No     Yes     No      No        No       Yes        Yes         No #2178      No
  7038    Backblaze B2                     No     Yes     No      No        Yes      Yes        Yes         No #2178      No
  7039    Box                              Yes    Yes    Yes      Yes     No #575    No         Yes            Yes        No
  7040    Dropbox                          Yes    Yes    Yes      Yes     No #575    No         Yes            Yes        Yes
  7041    FTP                              No      No    Yes      Yes       No       No         Yes         No #2178      No
  7042    Google Cloud Storage             Yes    Yes     No      No        No       Yes        Yes         No #2178      No
  7043    Google Drive                     Yes    Yes    Yes      Yes       Yes      Yes        Yes            Yes        Yes
  7044    HTTP                             No      No     No      No        No       No          No         No #2178      No
  7045    Hubic                           Yes †   Yes     No      No        No       Yes        Yes         No #2178      Yes
  7046    Jottacloud                       Yes    Yes    Yes      Yes       No       Yes         No            Yes        Yes
  7047    Mega                             Yes     No    Yes      Yes       Yes      No          No         No #2178      Yes
  7048    Microsoft Azure Blob Storage     Yes    Yes     No      No        No       Yes         No         No #2178      No
  7049    Microsoft OneDrive               Yes    Yes    Yes      Yes     No #575    No          No            Yes        Yes
  7050    OpenDrive                        Yes    Yes    Yes      Yes       No       No          No            No         No
  7051    Openstack Swift                 Yes †   Yes     No      No        No       Yes        Yes         No #2178      Yes
  7052    pCloud                           Yes    Yes    Yes      Yes       Yes      No          No         No #2178      Yes
  7053    QingStor                         No     Yes     No      No        No       Yes         No         No #2178      No
  7054    SFTP                             No      No    Yes      Yes       No       No         Yes         No #2178      Yes
  7055    WebDAV                           Yes    Yes    Yes      Yes       No       No        Yes ‡        No #2178      Yes
  7056    Yandex Disk                      Yes    Yes    Yes      Yes       Yes      No         Yes            Yes        Yes
  7057    The local filesystem             Yes     No    Yes      Yes       No       No         Yes            No         Yes
  7058  
  7059  Purge
  7060  
  7061  This deletes a directory quicker than just deleting all the files in the
  7062  directory.
  7063  
  7064  † Note Swift and Hubic implement this in order to delete directory
  7065  markers but they don’t actually have a quicker way of deleting files
  7066  other than deleting them individually.
  7067  
  7068  ‡ StreamUpload is not supported with Nextcloud
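
For example, to delete a directory and all of its contents in one
operation (remote:dir is a placeholder):

    rclone purge remote:dir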
  7069  
  7070  Copy
  7071  
Used when copying an object to and from the same remote. This is known
as a server side copy so you can copy a file without downloading it and
uploading it again. It is used by rclone copy, and by rclone move if
the remote doesn’t support Move directly.
  7076  
  7077  If the server doesn’t support Copy directly then for copy operations the
  7078  file is downloaded then re-uploaded.
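
For example, copying between two paths on the same remote will use a
server side copy where the remote supports it (the paths are
placeholders):

    rclone copy remote:source remote:destination
    rclone copyto remote:file.txt remote:file-copy.txt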
  7079  
  7080  Move
  7081  
  7082  Used when moving/renaming an object on the same remote. This is known as
  7083  a server side move of a file. This is used in rclone move if the server
  7084  doesn’t support DirMove.
  7085  
  7086  If the server isn’t capable of Move then rclone simulates it with Copy
  7087  then delete. If the server doesn’t support Copy then rclone will
  7088  download the file and re-upload it.
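
For example, renaming a single file on the same remote (the paths are
placeholders):

    rclone moveto remote:old-name.txt remote:new-name.txt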
  7089  
  7090  DirMove
  7091  
  7092  This is used to implement rclone move to move a directory if possible.
  7093  If it isn’t then it will use Move on each file (which falls back to Copy
  7094  then download and upload - see Move section).
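
For example, renaming a whole directory on the same remote will use
DirMove where available and fall back as described above (the paths
are placeholders):

    rclone move remote:old-dir remote:new-dir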
  7095  
  7096  CleanUp
  7097  
  7098  This is used for emptying the trash for a remote by rclone cleanup.
  7099  
  7100  If the server can’t do CleanUp then rclone cleanup will return an error.
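
For example, to empty the trash of a remote which supports it (remote:
is a placeholder):

    rclone cleanup remote: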
  7101  
  7102  ListR
  7103  
  7104  The remote supports a recursive list to list all the contents beneath a
  7105  directory quickly. This enables the --fast-list flag to work. See the
  7106  rclone docs for more details.
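
For example, to list a bucket recursively using fewer transactions
(remote:bucket is a placeholder):

    rclone ls --fast-list remote:bucket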
  7107  
  7108  StreamUpload
  7109  
  7110  Some remotes allow files to be uploaded without knowing the file size in
  7111  advance. This allows certain operations to work without spooling the
  7112  file to local disk first, e.g. rclone rcat.
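
For example, piping data of unknown size straight into a remote file
(the path is a placeholder):

    echo "hello world" | rclone rcat remote:path/to/file.txt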
  7113  
  7114  LinkSharing
  7115  
  7116  Sets the necessary permissions on a file or folder and prints a link
  7117  that allows others to access them, even if they don’t have an account on
  7118  the particular cloud provider.
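
For example, to generate a public link for a file (the path is a
placeholder):

    rclone link remote:path/to/file.txt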
  7119  
  7120  About
  7121  
  7122  This is used to fetch quota information from the remote, like bytes
  7123  used/free/quota and bytes used in the trash.
  7124  
This is also used to return the space used and available for
rclone mount.
  7126  
  7127  If the server can’t do About then rclone about will return an error.
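
For example (remote: is a placeholder):

    rclone about remote: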
  7128  
  7129  
  7130  Alias
  7131  
  7132  The alias remote provides a new name for another remote.
  7133  
  7134  Paths may be as deep as required or a local path, eg
  7135  remote:directory/subdirectory or /directory/subdirectory.
  7136  
  7137  During the initial setup with rclone config you will specify the target
  7138  remote. The target remote can either be a local path or another remote.
  7139  
Subfolders can be used in the target remote. Assume an alias remote
named backup with the target mydrive:private/backup. Invoking
rclone mkdir backup:desktop is exactly the same as invoking
rclone mkdir mydrive:private/backup/desktop.
  7144  
  7145  There will be no special handling of paths containing .. segments.
  7146  Invoking rclone mkdir backup:../desktop is exactly the same as invoking
  7147  rclone mkdir mydrive:private/backup/../desktop. The empty path is not
  7148  allowed as a remote. To alias the current directory use . instead.
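
An alias can also be created non-interactively with rclone config
create, for example (using the hypothetical backup alias above):

    rclone config create backup alias remote mydrive:private/backup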
  7149  
Here is an example of how to make an alias called remote for a local
folder. First run:
  7152  
  7153       rclone config
  7154  
  7155  This will guide you through an interactive setup process:
  7156  
  7157      No remotes found - make a new one
  7158      n) New remote
  7159      s) Set configuration password
  7160      q) Quit config
  7161      n/s/q> n
  7162      name> remote
  7163      Type of storage to configure.
  7164      Choose a number from below, or type in your own value
  7165       1 / Alias for an existing remote
  7166         \ "alias"
  7167       2 / Amazon Drive
  7168         \ "amazon cloud drive"
  7169       3 / Amazon S3 (also Dreamhost, Ceph, Minio)
  7170         \ "s3"
  7171       4 / Backblaze B2
  7172         \ "b2"
  7173       5 / Box
  7174         \ "box"
  7175       6 / Cache a remote
  7176         \ "cache"
  7177       7 / Dropbox
  7178         \ "dropbox"
  7179       8 / Encrypt/Decrypt a remote
  7180         \ "crypt"
  7181       9 / FTP Connection
  7182         \ "ftp"
  7183      10 / Google Cloud Storage (this is not Google Drive)
  7184         \ "google cloud storage"
  7185      11 / Google Drive
  7186         \ "drive"
  7187      12 / Hubic
  7188         \ "hubic"
  7189      13 / Local Disk
  7190         \ "local"
  7191      14 / Microsoft Azure Blob Storage
  7192         \ "azureblob"
  7193      15 / Microsoft OneDrive
  7194         \ "onedrive"
  7195      16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  7196         \ "swift"
  7197      17 / Pcloud
  7198         \ "pcloud"
  7199      18 / QingCloud Object Storage
  7200         \ "qingstor"
  7201      19 / SSH/SFTP Connection
  7202         \ "sftp"
  7203      20 / Webdav
  7204         \ "webdav"
  7205      21 / Yandex Disk
  7206         \ "yandex"
  7207      22 / http Connection
  7208         \ "http"
  7209      Storage> 1
  7210      Remote or path to alias.
  7211      Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
  7212      remote> /mnt/storage/backup
  7213      Remote config
  7214      --------------------
  7215      [remote]
  7216      remote = /mnt/storage/backup
  7217      --------------------
  7218      y) Yes this is OK
  7219      e) Edit this remote
  7220      d) Delete this remote
  7221      y/e/d> y
  7222      Current remotes:
  7223  
  7224      Name                 Type
  7225      ====                 ====
  7226      remote               alias
  7227  
  7228      e) Edit existing remote
  7229      n) New remote
  7230      d) Delete remote
  7231      r) Rename remote
  7232      c) Copy remote
  7233      s) Set configuration password
  7234      q) Quit config
  7235      e/n/d/r/c/s/q> q
  7236  
  7237  Once configured you can then use rclone like this,
  7238  
  7239  List directories in top level in /mnt/storage/backup
  7240  
  7241      rclone lsd remote:
  7242  
  7243  List all the files in /mnt/storage/backup
  7244  
  7245      rclone ls remote:
  7246  
  7247  Copy another local directory to the alias directory called source
  7248  
  7249      rclone copy /home/source remote:source
  7250  
  7251  Standard Options
  7252  
  7253  Here are the standard options specific to alias (Alias for an existing
  7254  remote).
  7255  
  7256  –alias-remote
  7257  
  7258  Remote or path to alias. Can be “myremote:path/to/dir”,
  7259  “myremote:bucket”, “myremote:” or “/local/path”.
  7260  
  7261  -   Config: remote
  7262  -   Env Var: RCLONE_ALIAS_REMOTE
  7263  -   Type: string
  7264  -   Default: ""
  7265  
  7266  
  7267  Amazon Drive
  7268  
  7269  Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
  7270  service run by Amazon for consumers.
  7271  
  7272  
  7273  Status
  7274  
  7275  IMPORTANT: rclone supports Amazon Drive only if you have your own set of
  7276  API keys. Unfortunately the Amazon Drive developer program is now closed
  7277  to new entries so if you don’t already have your own set of keys you
  7278  will not be able to use rclone with Amazon Drive.
  7279  
  7280  For the history on why rclone no longer has a set of Amazon Drive API
  7281  keys see the forum.
  7282  
  7283  If you happen to know anyone who works at Amazon then please ask them to
  7284  re-instate rclone into the Amazon Drive developer program - thanks!
  7285  
  7286  
  7287  Setup
  7288  
  7289  The initial setup for Amazon Drive involves getting a token from Amazon
  7290  which you need to do in your browser. rclone config walks you through
  7291  it.
  7292  
  7293  The configuration process for Amazon Drive may involve using an oauth
  7294  proxy. This is used to keep the Amazon credentials out of the source
  7295  code. The proxy runs in Google’s very secure App Engine environment and
  7296  doesn’t store any credentials which pass through it.
  7297  
Since rclone doesn’t currently have its own Amazon Drive credentials,
you will either need to have your own client_id and client_secret with
Amazon Drive, or use a third party oauth proxy, in which case you will
need to enter client_id, client_secret, auth_url and token_url.
  7302  
Note also that if you are not using Amazon’s auth_url and token_url (ie
you filled in something for those), then when setting up on a remote
machine you can only configure rclone by copying the config file -
rclone authorize will not work.
  7307  
  7308  Here is an example of how to make a remote called remote. First run:
  7309  
  7310       rclone config
  7311  
  7312  This will guide you through an interactive setup process:
  7313  
  7314      No remotes found - make a new one
  7315      n) New remote
  7316      r) Rename remote
  7317      c) Copy remote
  7318      s) Set configuration password
  7319      q) Quit config
  7320      n/r/c/s/q> n
  7321      name> remote
  7322      Type of storage to configure.
  7323      Choose a number from below, or type in your own value
  7324       1 / Amazon Drive
  7325         \ "amazon cloud drive"
  7326       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
  7327         \ "s3"
  7328       3 / Backblaze B2
  7329         \ "b2"
  7330       4 / Dropbox
  7331         \ "dropbox"
  7332       5 / Encrypt/Decrypt a remote
  7333         \ "crypt"
  7334       6 / FTP Connection
  7335         \ "ftp"
  7336       7 / Google Cloud Storage (this is not Google Drive)
  7337         \ "google cloud storage"
  7338       8 / Google Drive
  7339         \ "drive"
  7340       9 / Hubic
  7341         \ "hubic"
  7342      10 / Local Disk
  7343         \ "local"
  7344      11 / Microsoft OneDrive
  7345         \ "onedrive"
  7346      12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  7347         \ "swift"
  7348      13 / SSH/SFTP Connection
  7349         \ "sftp"
  7350      14 / Yandex Disk
  7351         \ "yandex"
  7352      Storage> 1
  7353      Amazon Application Client Id - required.
  7354      client_id> your client ID goes here
  7355      Amazon Application Client Secret - required.
  7356      client_secret> your client secret goes here
  7357      Auth server URL - leave blank to use Amazon's.
  7358      auth_url> Optional auth URL
  7359      Token server url - leave blank to use Amazon's.
  7360      token_url> Optional token URL
  7361      Remote config
  7362      Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
  7363      Use auto config?
  7364       * Say Y if not sure
  7365       * Say N if you are working on a remote or headless machine
  7366      y) Yes
  7367      n) No
  7368      y/n> y
  7369      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  7370      Log in and authorize rclone for access
  7371      Waiting for code...
  7372      Got code
  7373      --------------------
  7374      [remote]
  7375      client_id = your client ID goes here
  7376      client_secret = your client secret goes here
  7377      auth_url = Optional auth URL
  7378      token_url = Optional token URL
  7379      token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
  7380      --------------------
  7381      y) Yes this is OK
  7382      e) Edit this remote
  7383      d) Delete this remote
  7384      y/e/d> y
  7385  
  7386  See the remote setup docs for how to set it up on a machine with no
  7387  Internet browser available.
  7388  
  7389  Note that rclone runs a webserver on your local machine to collect the
  7390  token as returned from Amazon. This only runs from the moment it opens
  7391  your browser to the moment you get back the verification code. This is
on http://127.0.0.1:53682/ and it may require you to unblock it
  7393  temporarily if you are running a host firewall.
  7394  
  7395  Once configured you can then use rclone like this,
  7396  
  7397  List directories in top level of your Amazon Drive
  7398  
  7399      rclone lsd remote:
  7400  
  7401  List all the files in your Amazon Drive
  7402  
  7403      rclone ls remote:
  7404  
  7405  To copy a local directory to an Amazon Drive directory called backup
  7406  
  7407      rclone copy /home/source remote:backup
  7408  
  7409  Modified time and MD5SUMs
  7410  
  7411  Amazon Drive doesn’t allow modification times to be changed via the API
  7412  so these won’t be accurate or used for syncing.
  7413  
  7414  It does store MD5SUMs so for a more accurate sync, you can use the
  7415  --checksum flag.
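
For example, a sync which compares hashes rather than modification
times (the paths are placeholders):

    rclone sync --checksum /home/source remote:backup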
  7416  
  7417  Deleting files
  7418  
  7419  Any files you delete with rclone will end up in the trash. Amazon don’t
  7420  provide an API to permanently delete files, nor to empty the trash, so
  7421  you will have to do that with one of Amazon’s apps or via the Amazon
  7422  Drive website. As of November 17, 2016, files are automatically deleted
  7423  by Amazon from the trash after 30 days.
  7424  
  7425  Using with non .com Amazon accounts
  7426  
  7427  Let’s say you usually use amazon.co.uk. When you authenticate with
  7428  rclone it will take you to an amazon.com page to log in. Your
  7429  amazon.co.uk email and password should work here just fine.
  7430  
  7431  Standard Options
  7432  
  7433  Here are the standard options specific to amazon cloud drive (Amazon
  7434  Drive).
  7435  
  7436  –acd-client-id
  7437  
  7438  Amazon Application Client ID.
  7439  
  7440  -   Config: client_id
  7441  -   Env Var: RCLONE_ACD_CLIENT_ID
  7442  -   Type: string
  7443  -   Default: ""
  7444  
  7445  –acd-client-secret
  7446  
  7447  Amazon Application Client Secret.
  7448  
  7449  -   Config: client_secret
  7450  -   Env Var: RCLONE_ACD_CLIENT_SECRET
  7451  -   Type: string
  7452  -   Default: ""
  7453  
  7454  Advanced Options
  7455  
  7456  Here are the advanced options specific to amazon cloud drive (Amazon
  7457  Drive).
  7458  
  7459  –acd-auth-url
  7460  
  7461  Auth server URL. Leave blank to use Amazon’s.
  7462  
  7463  -   Config: auth_url
  7464  -   Env Var: RCLONE_ACD_AUTH_URL
  7465  -   Type: string
  7466  -   Default: ""
  7467  
  7468  –acd-token-url
  7469  
Token server URL. Leave blank to use Amazon’s.
  7471  
  7472  -   Config: token_url
  7473  -   Env Var: RCLONE_ACD_TOKEN_URL
  7474  -   Type: string
  7475  -   Default: ""
  7476  
  7477  –acd-checkpoint
  7478  
  7479  Checkpoint for internal polling (debug).
  7480  
  7481  -   Config: checkpoint
  7482  -   Env Var: RCLONE_ACD_CHECKPOINT
  7483  -   Type: string
  7484  -   Default: ""
  7485  
  7486  –acd-upload-wait-per-gb
  7487  
  7488  Additional time per GB to wait after a failed complete upload to see if
  7489  it appears.
  7490  
  7491  Sometimes Amazon Drive gives an error when a file has been fully
  7492  uploaded but the file appears anyway after a little while. This happens
  7493  sometimes for files over 1GB in size and nearly every time for files
  7494  bigger than 10GB. This parameter controls the time rclone waits for the
  7495  file to appear.
  7496  
  7497  The default value for this parameter is 3 minutes per GB, so by default
  7498  it will wait 3 minutes for every GB uploaded to see if the file appears.
  7499  
  7500  You can disable this feature by setting it to 0. This may cause conflict
  7501  errors as rclone retries the failed upload but the file will most likely
  7502  appear correctly eventually.
  7503  
  7504  These values were determined empirically by observing lots of uploads of
  7505  big files for a range of file sizes.
  7506  
  7507  Upload with the “-v” flag to see more info about what rclone is doing in
  7508  this situation.
  7509  
  7510  -   Config: upload_wait_per_gb
  7511  -   Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
  7512  -   Type: Duration
  7513  -   Default: 3m0s
  7514  
  7515  –acd-templink-threshold
  7516  
  7517  Files >= this size will be downloaded via their tempLink.
  7518  
  7519  Files this size or more will be downloaded via their “tempLink”. This is
  7520  to work around a problem with Amazon Drive which blocks downloads of
  7521  files bigger than about 10GB. The default for this is 9GB which
  7522  shouldn’t need to be changed.
  7523  
  7524  To download files above this threshold, rclone requests a “tempLink”
  7525  which downloads the file through a temporary URL directly from the
  7526  underlying S3 storage.
  7527  
  7528  -   Config: templink_threshold
  7529  -   Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
  7530  -   Type: SizeSuffix
  7531  -   Default: 9G
  7532  
  7533  Limitations
  7534  
  7535  Note that Amazon Drive is case insensitive so you can’t have a file
  7536  called “Hello.doc” and one called “hello.doc”.
  7537  
  7538  Amazon Drive has rate limiting so you may notice errors in the sync (429
  7539  errors). rclone will automatically retry the sync up to 3 times by
  7540  default (see --retries flag) which should hopefully work around this
  7541  problem.
  7542  
  7543  Amazon Drive has an internal limit of file sizes that can be uploaded to
  7544  the service. This limit is not officially published, but all files
  7545  larger than this will fail.
  7546  
At the time of writing (Jan 2016) it is in the area of 50GB per file.
This means that larger files are likely to fail.
  7549  
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as it would any
other failure. To avoid this problem, use the --max-size 50000M option
to limit the maximum size of uploaded files. Note that --max-size does
not split files into segments, it only ignores files over this size.
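
For example (the paths are placeholders):

    rclone sync --max-size 50000M /home/source remote:backup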
  7555  
  7556  
  7557  Amazon S3 Storage Providers
  7558  
  7559  The S3 backend can be used with a number of different providers:
  7560  
  7561  -   AWS S3
  7562  -   Alibaba Cloud (Aliyun) Object Storage System (OSS)
  7563  -   Ceph
  7564  -   DigitalOcean Spaces
  7565  -   Dreamhost
  7566  -   IBM COS S3
  7567  -   Minio
  7568  -   Wasabi
  7569  
Paths are specified as remote:bucket (or remote: for the lsd command).
  7571  You may put subdirectories in too, eg remote:bucket/path/to/dir.
  7572  
  7573  Once you have made a remote (see the provider specific section above)
  7574  you can use it like this:
  7575  
  7576  See all buckets
  7577  
  7578      rclone lsd remote:
  7579  
  7580  Make a new bucket
  7581  
  7582      rclone mkdir remote:bucket
  7583  
  7584  List the contents of a bucket
  7585  
  7586      rclone ls remote:bucket
  7587  
  7588  Sync /home/local/directory to the remote bucket, deleting any excess
  7589  files in the bucket.
  7590  
  7591      rclone sync /home/local/directory remote:bucket
  7592  
  7593  
  7594  AWS S3
  7595  
  7596  Here is an example of making an s3 configuration. First run
  7597  
  7598      rclone config
  7599  
  7600  This will guide you through an interactive setup process.
  7601  
  7602      No remotes found - make a new one
  7603      n) New remote
  7604      s) Set configuration password
  7605      q) Quit config
  7606      n/s/q> n
  7607      name> remote
  7608      Type of storage to configure.
  7609      Choose a number from below, or type in your own value
  7610       1 / Alias for an existing remote
  7611         \ "alias"
  7612       2 / Amazon Drive
  7613         \ "amazon cloud drive"
  7614       3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
  7615         \ "s3"
  7616       4 / Backblaze B2
  7617         \ "b2"
  7618      [snip]
  7619      23 / http Connection
  7620         \ "http"
  7621      Storage> s3
  7622      Choose your S3 provider.
  7623      Choose a number from below, or type in your own value
  7624       1 / Amazon Web Services (AWS) S3
  7625         \ "AWS"
  7626       2 / Ceph Object Storage
  7627         \ "Ceph"
  7628       3 / Digital Ocean Spaces
  7629         \ "DigitalOcean"
  7630       4 / Dreamhost DreamObjects
  7631         \ "Dreamhost"
  7632       5 / IBM COS S3
  7633         \ "IBMCOS"
  7634       6 / Minio Object Storage
  7635         \ "Minio"
  7636       7 / Wasabi Object Storage
  7637         \ "Wasabi"
  7638       8 / Any other S3 compatible provider
  7639         \ "Other"
  7640      provider> 1
  7641      Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  7642      Choose a number from below, or type in your own value
  7643       1 / Enter AWS credentials in the next step
  7644         \ "false"
  7645       2 / Get AWS credentials from the environment (env vars or IAM)
  7646         \ "true"
  7647      env_auth> 1
  7648      AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  7649      access_key_id> XXX
  7650      AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  7651      secret_access_key> YYY
  7652      Region to connect to.
  7653      Choose a number from below, or type in your own value
  7654         / The default endpoint - a good choice if you are unsure.
  7655       1 | US Region, Northern Virginia or Pacific Northwest.
  7656         | Leave location constraint empty.
  7657         \ "us-east-1"
  7658         / US East (Ohio) Region
  7659       2 | Needs location constraint us-east-2.
  7660         \ "us-east-2"
  7661         / US West (Oregon) Region
  7662       3 | Needs location constraint us-west-2.
  7663         \ "us-west-2"
  7664         / US West (Northern California) Region
  7665       4 | Needs location constraint us-west-1.
  7666         \ "us-west-1"
  7667         / Canada (Central) Region
  7668       5 | Needs location constraint ca-central-1.
  7669         \ "ca-central-1"
  7670         / EU (Ireland) Region
  7671       6 | Needs location constraint EU or eu-west-1.
  7672         \ "eu-west-1"
  7673         / EU (London) Region
  7674       7 | Needs location constraint eu-west-2.
  7675         \ "eu-west-2"
  7676         / EU (Frankfurt) Region
  7677       8 | Needs location constraint eu-central-1.
  7678         \ "eu-central-1"
  7679         / Asia Pacific (Singapore) Region
  7680       9 | Needs location constraint ap-southeast-1.
  7681         \ "ap-southeast-1"
  7682         / Asia Pacific (Sydney) Region
  7683      10 | Needs location constraint ap-southeast-2.
  7684         \ "ap-southeast-2"
  7685         / Asia Pacific (Tokyo) Region
  7686      11 | Needs location constraint ap-northeast-1.
  7687         \ "ap-northeast-1"
  7688         / Asia Pacific (Seoul)
  7689      12 | Needs location constraint ap-northeast-2.
  7690         \ "ap-northeast-2"
  7691         / Asia Pacific (Mumbai)
  7692      13 | Needs location constraint ap-south-1.
  7693         \ "ap-south-1"
  7694         / South America (Sao Paulo) Region
  7695      14 | Needs location constraint sa-east-1.
  7696         \ "sa-east-1"
  7697      region> 1
  7698      Endpoint for S3 API.
  7699      Leave blank if using AWS to use the default endpoint for the region.
  7700      endpoint> 
  7701      Location constraint - must be set to match the Region. Used when creating buckets only.
  7702      Choose a number from below, or type in your own value
  7703       1 / Empty for US Region, Northern Virginia or Pacific Northwest.
  7704         \ ""
  7705       2 / US East (Ohio) Region.
  7706         \ "us-east-2"
  7707       3 / US West (Oregon) Region.
  7708         \ "us-west-2"
  7709       4 / US West (Northern California) Region.
  7710         \ "us-west-1"
  7711       5 / Canada (Central) Region.
  7712         \ "ca-central-1"
  7713       6 / EU (Ireland) Region.
  7714         \ "eu-west-1"
  7715       7 / EU (London) Region.
  7716         \ "eu-west-2"
  7717       8 / EU Region.
  7718         \ "EU"
  7719       9 / Asia Pacific (Singapore) Region.
  7720         \ "ap-southeast-1"
  7721      10 / Asia Pacific (Sydney) Region.
  7722         \ "ap-southeast-2"
  7723      11 / Asia Pacific (Tokyo) Region.
  7724         \ "ap-northeast-1"
  7725      12 / Asia Pacific (Seoul)
  7726         \ "ap-northeast-2"
  7727      13 / Asia Pacific (Mumbai)
  7728         \ "ap-south-1"
  7729      14 / South America (Sao Paulo) Region.
  7730         \ "sa-east-1"
  7731      location_constraint> 1
  7732      Canned ACL used when creating buckets and/or storing objects in S3.
  7733      For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  7734      Choose a number from below, or type in your own value
  7735       1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  7736         \ "private"
  7737       2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  7738         \ "public-read"
  7739         / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  7740       3 | Granting this on a bucket is generally not recommended.
  7741         \ "public-read-write"
  7742       4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
  7743         \ "authenticated-read"
  7744         / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
  7745       5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  7746         \ "bucket-owner-read"
  7747         / Both the object owner and the bucket owner get FULL_CONTROL over the object.
  7748       6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  7749         \ "bucket-owner-full-control"
  7750      acl> 1
  7751      The server-side encryption algorithm used when storing this object in S3.
  7752      Choose a number from below, or type in your own value
  7753       1 / None
  7754         \ ""
  7755       2 / AES256
  7756         \ "AES256"
  7757      server_side_encryption> 1
  7758      The storage class to use when storing objects in S3.
  7759      Choose a number from below, or type in your own value
  7760       1 / Default
  7761         \ ""
  7762       2 / Standard storage class
  7763         \ "STANDARD"
  7764       3 / Reduced redundancy storage class
  7765         \ "REDUCED_REDUNDANCY"
  7766       4 / Standard Infrequent Access storage class
  7767         \ "STANDARD_IA"
  7768       5 / One Zone Infrequent Access storage class
  7769         \ "ONEZONE_IA"
  7770       6 / Glacier storage class
  7771         \ "GLACIER"
  7772       7 / Glacier Deep Archive storage class
  7773         \ "DEEP_ARCHIVE"
  7774      storage_class> 1
  7775      Remote config
  7776      --------------------
  7777      [remote]
  7778      type = s3
  7779      provider = AWS
  7780      env_auth = false
  7781      access_key_id = XXX
  7782      secret_access_key = YYY
  7783      region = us-east-1
  7784      endpoint = 
  7785      location_constraint = 
  7786      acl = private
  7787      server_side_encryption = 
  7788      storage_class = 
  7789      --------------------
  7790      y) Yes this is OK
  7791      e) Edit this remote
  7792      d) Delete this remote
  7793      y/e/d> 
  7794  
  7795  –fast-list
  7796  
  7797  This remote supports --fast-list which allows you to use fewer
  7798  transactions in exchange for more memory. See the rclone docs for more
  7799  details.
  7800  
  7801  –update and –use-server-modtime
  7802  
As noted below, the modified time is stored as metadata on the object.
  7804  It is used by default for all operations that require checking the time
  7805  a file was last updated. It allows rclone to treat the remote more like
  7806  a true filesystem, but it is inefficient because it requires an extra
  7807  API call to retrieve the metadata.
  7808  
  7809  For many operations, the time the object was last uploaded to the remote
  7810  is sufficient to determine if it is “dirty”. By using --update along
  7811  with --use-server-modtime, you can avoid the extra API call and simply
  7812  upload files whose local modtime is newer than the time it was last
  7813  uploaded.
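
For example, to upload only files whose local modification time is
newer than the time they were last uploaded (the paths are
placeholders):

    rclone sync --update --use-server-modtime /home/source remote:bucket/path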
  7814  
  7815  Modified time
  7816  
  7817  The modified time is stored as metadata on the object as
  7818  X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.
  7819  
If the modification time needs to be updated rclone will attempt to
perform a server side copy to update the modification time if the
object can be copied in a single part. If the object is larger than
5GB or is in Glacier or Glacier Deep Archive storage then the object
will be uploaded rather than copied.
  7825  
  7826  Multipart uploads
  7827  
  7828  rclone supports multipart uploads with S3 which means that it can upload
  7829  files bigger than 5GB.
  7830  
  7831  Note that files uploaded _both_ with multipart upload _and_ through
  7832  crypt remotes do not have MD5 sums.
  7833  
  7834  rclone switches from single part uploads to multipart uploads at the
  7835  point specified by --s3-upload-cutoff. This can be a maximum of 5GB and
  7836  a minimum of 0 (ie always upload multipart files).
  7837  
  7838  The chunk sizes used in the multipart upload are specified by
  7839  --s3-chunk-size and the number of chunks uploaded concurrently is
  7840  specified by --s3-upload-concurrency.
  7841  
  7842  Multipart uploads will use --transfers * --s3-upload-concurrency *
--s3-chunk-size extra memory. Single part uploads do not use extra
memory.
  7845  
  7846  Single part transfers can be faster than multipart transfers or slower
  7847  depending on your latency from S3 - the more latency, the more likely
  7848  single part transfers will be faster.
  7849  
  7850  Increasing --s3-upload-concurrency will increase throughput (8 would be
  7851  a sensible value) and increasing --s3-chunk-size also increases
  7852  throughput (16M would be sensible). Increasing either of these will use
  7853  more memory. The default values are high enough to gain most of the
  7854  possible performance without using too much memory.
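
For example, a copy tuned for higher throughput at the cost of memory -
with the default --transfers 4 this would use roughly
4 * 8 * 16M = 512M of additional memory (the paths are placeholders):

    rclone copy --s3-upload-concurrency 8 --s3-chunk-size 16M /home/source remote:bucket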
  7855  
  7856  Buckets and Regions
  7857  
  7858  With Amazon S3 you can list buckets (rclone lsd) using any region, but
  7859  you can only access the content of a bucket from the region it was
  7860  created in. If you attempt to access a bucket from the wrong region, you
  7861  will get an error, incorrect region, the bucket is not in 'XXX' region.
  7862  
  7863  Authentication
  7864  
  7865  There are a number of ways to supply rclone with a set of AWS
  7866  credentials, with and without using the environment.
  7867  
  7868  The different authentication methods are tried in this order:
  7869  
  7870  -   Directly in the rclone configuration file (env_auth = false in the
  7871      config file):
  7872      -   access_key_id and secret_access_key are required.
  7873      -   session_token can be optionally set when using AWS STS.
  7874  -   Runtime configuration (env_auth = true in the config file):
  7875      -   Export the following environment variables before running
  7876          rclone:
  7877          -   Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
  7878          -   Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
  7879          -   Session Token: AWS_SESSION_TOKEN (optional)
  7880      -   Or, use a named profile:
  7881          -   Profile files are standard files used by AWS CLI tools
        -   By default it will use the profile in your home directory
            (eg ~/.aws/credentials on unix based systems) and the
            “default” profile; to change this, set these environment
            variables:
  7886              -   AWS_SHARED_CREDENTIALS_FILE to control which file.
  7887              -   AWS_PROFILE to control which profile to use.
  7888      -   Or, run rclone in an ECS task with an IAM role (AWS only).
  7889      -   Or, run rclone on an EC2 instance with an IAM role (AWS only).
  7890  
If none of these options ends up providing rclone with AWS credentials
then S3 interaction will be non-authenticated (see below).
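
For example, with env_auth = true in the config file, credentials can
be supplied via the environment like this (the values and remote name
are placeholders):

    export AWS_ACCESS_KEY_ID=XXX
    export AWS_SECRET_ACCESS_KEY=YYY
    rclone lsd remote: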
  7893  
  7894  S3 Permissions
  7895  
  7896  When using the sync subcommand of rclone the following minimum
  7897  permissions are required to be available on the bucket being written to:
  7898  
  7899  -   ListBucket
  7900  -   DeleteObject
  7901  -   GetObject
  7902  -   PutObject
  7903  -   PutObjectACL
  7904  
  7905  Example policy:
  7906  
  7907      {
  7908          "Version": "2012-10-17",
  7909          "Statement": [
  7910              {
  7911                  "Effect": "Allow",
  7912                  "Principal": {
  7913                      "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
  7914                  },
  7915                  "Action": [
  7916                      "s3:ListBucket",
  7917                      "s3:DeleteObject",
  7918                      "s3:GetObject",
  7919                      "s3:PutObject",
  7920                      "s3:PutObjectAcl"
  7921                  ],
  7922                  "Resource": [
  7923                    "arn:aws:s3:::BUCKET_NAME/*",
  7924                    "arn:aws:s3:::BUCKET_NAME"
  7925                  ]
  7926              }
  7927          ]
  7928      }
  7929  
  7930  Notes on above:
  7931  
1.  This is a policy that can be used when creating a bucket. It
    assumes that USER_NAME has been created.
  7934  2.  The Resource entry must include both resource ARNs, as one implies
  7935      the bucket and the other implies the bucket’s objects.
  7936  
  7937  For reference, here’s an Ansible script that will generate one or more
  7938  buckets that will work with rclone sync.
  7939  
  7940  Key Management System (KMS)
  7941  
  7942  If you are using server side encryption with KMS then you will find you
  7943  can’t transfer small objects. As a work-around you can use the
  7944  --ignore-checksum flag.
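
For example (the paths are placeholders):

    rclone copy --ignore-checksum /home/source remote:bucket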
  7945  
  7946  A proper fix is being worked on in issue #1824.
  7947  
  7948  Glacier and Glacier Deep Archive
  7949  
  7950  You can upload objects using the glacier storage class or transition
  7951  them to glacier using a lifecycle policy. The bucket can still be synced
  7952  or copied into normally, but if rclone tries to access data from the
  7953  glacier storage class you will see an error like below.
  7954  
  7955      2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
  7956  
  7957  In this case you need to restore the object(s) in question before using
  7958  rclone.
  7959  
Note that rclone only speaks the S3 API; it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
  7962  
  7963  Standard Options
  7964  
  7965  Here are the standard options specific to s3 (Amazon S3 Compliant
  7966  Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS,
  7967  Minio, etc)).
  7968  
  7969  –s3-provider
  7970  
  7971  Choose your S3 provider.
  7972  
  7973  -   Config: provider
  7974  -   Env Var: RCLONE_S3_PROVIDER
  7975  -   Type: string
  7976  -   Default: ""
  7977  -   Examples:
  7978      -   “AWS”
  7979          -   Amazon Web Services (AWS) S3
  7980      -   “Alibaba”
  7981          -   Alibaba Cloud Object Storage System (OSS) formerly Aliyun
  7982      -   “Ceph”
  7983          -   Ceph Object Storage
  7984      -   “DigitalOcean”
  7985          -   Digital Ocean Spaces
  7986      -   “Dreamhost”
  7987          -   Dreamhost DreamObjects
  7988      -   “IBMCOS”
  7989          -   IBM COS S3
  7990      -   “Minio”
  7991          -   Minio Object Storage
  7992      -   “Netease”
  7993          -   Netease Object Storage (NOS)
  7994      -   “Wasabi”
  7995          -   Wasabi Object Storage
  7996      -   “Other”
  7997          -   Any other S3 compatible provider
  7998  
  7999  –s3-env-auth
  8000  
  8001  Get AWS credentials from runtime (environment variables or EC2/ECS meta
  8002  data if no env vars). Only applies if access_key_id and
  8003  secret_access_key is blank.
  8004  
  8005  -   Config: env_auth
  8006  -   Env Var: RCLONE_S3_ENV_AUTH
  8007  -   Type: bool
  8008  -   Default: false
  8009  -   Examples:
  8010      -   “false”
  8011          -   Enter AWS credentials in the next step
  8012      -   “true”
  8013          -   Get AWS credentials from the environment (env vars or IAM)
  8014  
  8015  –s3-access-key-id
  8016  
  8017  AWS Access Key ID. Leave blank for anonymous access or runtime
  8018  credentials.
  8019  
  8020  -   Config: access_key_id
  8021  -   Env Var: RCLONE_S3_ACCESS_KEY_ID
  8022  -   Type: string
  8023  -   Default: ""
  8024  
  8025  –s3-secret-access-key
  8026  
AWS Secret Access Key (password). Leave blank for anonymous access or
runtime credentials.
  8029  
  8030  -   Config: secret_access_key
  8031  -   Env Var: RCLONE_S3_SECRET_ACCESS_KEY
  8032  -   Type: string
  8033  -   Default: ""
  8034  
  8035  –s3-region
  8036  
  8037  Region to connect to.
  8038  
  8039  -   Config: region
  8040  -   Env Var: RCLONE_S3_REGION
  8041  -   Type: string
  8042  -   Default: ""
  8043  -   Examples:
  8044      -   “us-east-1”
  8045          -   The default endpoint - a good choice if you are unsure.
  8046          -   US Region, Northern Virginia or Pacific Northwest.
  8047          -   Leave location constraint empty.
  8048      -   “us-east-2”
  8049          -   US East (Ohio) Region
  8050          -   Needs location constraint us-east-2.
  8051      -   “us-west-2”
  8052          -   US West (Oregon) Region
  8053          -   Needs location constraint us-west-2.
  8054      -   “us-west-1”
  8055          -   US West (Northern California) Region
  8056          -   Needs location constraint us-west-1.
  8057      -   “ca-central-1”
  8058          -   Canada (Central) Region
  8059          -   Needs location constraint ca-central-1.
  8060      -   “eu-west-1”
  8061          -   EU (Ireland) Region
  8062          -   Needs location constraint EU or eu-west-1.
  8063      -   “eu-west-2”
  8064          -   EU (London) Region
  8065          -   Needs location constraint eu-west-2.
  8066      -   “eu-north-1”
  8067          -   EU (Stockholm) Region
  8068          -   Needs location constraint eu-north-1.
  8069      -   “eu-central-1”
  8070          -   EU (Frankfurt) Region
  8071          -   Needs location constraint eu-central-1.
  8072      -   “ap-southeast-1”
  8073          -   Asia Pacific (Singapore) Region
  8074          -   Needs location constraint ap-southeast-1.
  8075      -   “ap-southeast-2”
  8076          -   Asia Pacific (Sydney) Region
  8077          -   Needs location constraint ap-southeast-2.
  8078      -   “ap-northeast-1”
  8079          -   Asia Pacific (Tokyo) Region
  8080          -   Needs location constraint ap-northeast-1.
  8081      -   “ap-northeast-2”
  8082          -   Asia Pacific (Seoul)
  8083          -   Needs location constraint ap-northeast-2.
  8084      -   “ap-south-1”
  8085          -   Asia Pacific (Mumbai)
  8086          -   Needs location constraint ap-south-1.
  8087      -   “sa-east-1”
  8088          -   South America (Sao Paulo) Region
  8089          -   Needs location constraint sa-east-1.
  8090  
  8091  –s3-region
  8092  
  8093  Region to connect to. Leave blank if you are using an S3 clone and you
  8094  don’t have a region.
  8095  
  8096  -   Config: region
  8097  -   Env Var: RCLONE_S3_REGION
  8098  -   Type: string
  8099  -   Default: ""
  8100  -   Examples:
  8101      -   ""
  8102          -   Use this if unsure. Will use v4 signatures and an empty
  8103              region.
  8104      -   “other-v2-signature”
  8105          -   Use this only if v4 signatures don’t work, eg pre Jewel/v10
  8106              CEPH.
  8107  
  8108  –s3-endpoint
  8109  
  8110  Endpoint for S3 API. Leave blank if using AWS to use the default
  8111  endpoint for the region.
  8112  
  8113  -   Config: endpoint
  8114  -   Env Var: RCLONE_S3_ENDPOINT
  8115  -   Type: string
  8116  -   Default: ""
  8117  
  8118  –s3-endpoint
  8119  
  8120  Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.
  8121  
  8122  -   Config: endpoint
  8123  -   Env Var: RCLONE_S3_ENDPOINT
  8124  -   Type: string
  8125  -   Default: ""
  8126  -   Examples:
  8127      -   “s3-api.us-geo.objectstorage.softlayer.net”
  8128          -   US Cross Region Endpoint
  8129      -   “s3-api.dal.us-geo.objectstorage.softlayer.net”
  8130          -   US Cross Region Dallas Endpoint
  8131      -   “s3-api.wdc-us-geo.objectstorage.softlayer.net”
  8132          -   US Cross Region Washington DC Endpoint
  8133      -   “s3-api.sjc-us-geo.objectstorage.softlayer.net”
  8134          -   US Cross Region San Jose Endpoint
  8135      -   “s3-api.us-geo.objectstorage.service.networklayer.com”
  8136          -   US Cross Region Private Endpoint
  8137      -   “s3-api.dal-us-geo.objectstorage.service.networklayer.com”
  8138          -   US Cross Region Dallas Private Endpoint
  8139      -   “s3-api.wdc-us-geo.objectstorage.service.networklayer.com”
  8140          -   US Cross Region Washington DC Private Endpoint
  8141      -   “s3-api.sjc-us-geo.objectstorage.service.networklayer.com”
  8142          -   US Cross Region San Jose Private Endpoint
  8143      -   “s3.us-east.objectstorage.softlayer.net”
  8144          -   US Region East Endpoint
  8145      -   “s3.us-east.objectstorage.service.networklayer.com”
  8146          -   US Region East Private Endpoint
  8147      -   “s3.us-south.objectstorage.softlayer.net”
  8148          -   US Region South Endpoint
  8149      -   “s3.us-south.objectstorage.service.networklayer.com”
  8150          -   US Region South Private Endpoint
  8151      -   “s3.eu-geo.objectstorage.softlayer.net”
  8152          -   EU Cross Region Endpoint
  8153      -   “s3.fra-eu-geo.objectstorage.softlayer.net”
  8154          -   EU Cross Region Frankfurt Endpoint
  8155      -   “s3.mil-eu-geo.objectstorage.softlayer.net”
  8156          -   EU Cross Region Milan Endpoint
  8157      -   “s3.ams-eu-geo.objectstorage.softlayer.net”
  8158          -   EU Cross Region Amsterdam Endpoint
  8159      -   “s3.eu-geo.objectstorage.service.networklayer.com”
  8160          -   EU Cross Region Private Endpoint
  8161      -   “s3.fra-eu-geo.objectstorage.service.networklayer.com”
  8162          -   EU Cross Region Frankfurt Private Endpoint
  8163      -   “s3.mil-eu-geo.objectstorage.service.networklayer.com”
  8164          -   EU Cross Region Milan Private Endpoint
  8165      -   “s3.ams-eu-geo.objectstorage.service.networklayer.com”
  8166          -   EU Cross Region Amsterdam Private Endpoint
  8167      -   “s3.eu-gb.objectstorage.softlayer.net”
  8168          -   Great Britain Endpoint
  8169      -   “s3.eu-gb.objectstorage.service.networklayer.com”
  8170          -   Great Britain Private Endpoint
  8171      -   “s3.ap-geo.objectstorage.softlayer.net”
  8172          -   APAC Cross Regional Endpoint
  8173      -   “s3.tok-ap-geo.objectstorage.softlayer.net”
  8174          -   APAC Cross Regional Tokyo Endpoint
  8175      -   “s3.hkg-ap-geo.objectstorage.softlayer.net”
  8176          -   APAC Cross Regional HongKong Endpoint
  8177      -   “s3.seo-ap-geo.objectstorage.softlayer.net”
  8178          -   APAC Cross Regional Seoul Endpoint
  8179      -   “s3.ap-geo.objectstorage.service.networklayer.com”
  8180          -   APAC Cross Regional Private Endpoint
  8181      -   “s3.tok-ap-geo.objectstorage.service.networklayer.com”
  8182          -   APAC Cross Regional Tokyo Private Endpoint
  8183      -   “s3.hkg-ap-geo.objectstorage.service.networklayer.com”
  8184          -   APAC Cross Regional HongKong Private Endpoint
  8185      -   “s3.seo-ap-geo.objectstorage.service.networklayer.com”
  8186          -   APAC Cross Regional Seoul Private Endpoint
  8187      -   “s3.mel01.objectstorage.softlayer.net”
  8188          -   Melbourne Single Site Endpoint
  8189      -   “s3.mel01.objectstorage.service.networklayer.com”
  8190          -   Melbourne Single Site Private Endpoint
  8191      -   “s3.tor01.objectstorage.softlayer.net”
  8192          -   Toronto Single Site Endpoint
  8193      -   “s3.tor01.objectstorage.service.networklayer.com”
  8194          -   Toronto Single Site Private Endpoint
  8195  
  8196  –s3-endpoint
  8197  
  8198  Endpoint for OSS API.
  8199  
  8200  -   Config: endpoint
  8201  -   Env Var: RCLONE_S3_ENDPOINT
  8202  -   Type: string
  8203  -   Default: ""
  8204  -   Examples:
  8205      -   “oss-cn-hangzhou.aliyuncs.com”
  8206          -   East China 1 (Hangzhou)
  8207      -   “oss-cn-shanghai.aliyuncs.com”
  8208          -   East China 2 (Shanghai)
  8209      -   “oss-cn-qingdao.aliyuncs.com”
  8210          -   North China 1 (Qingdao)
  8211      -   “oss-cn-beijing.aliyuncs.com”
  8212          -   North China 2 (Beijing)
  8213      -   “oss-cn-zhangjiakou.aliyuncs.com”
  8214          -   North China 3 (Zhangjiakou)
  8215      -   “oss-cn-huhehaote.aliyuncs.com”
  8216          -   North China 5 (Huhehaote)
  8217      -   “oss-cn-shenzhen.aliyuncs.com”
  8218          -   South China 1 (Shenzhen)
  8219      -   “oss-cn-hongkong.aliyuncs.com”
  8220          -   Hong Kong (Hong Kong)
  8221      -   “oss-us-west-1.aliyuncs.com”
  8222          -   US West 1 (Silicon Valley)
  8223      -   “oss-us-east-1.aliyuncs.com”
  8224          -   US East 1 (Virginia)
  8225      -   “oss-ap-southeast-1.aliyuncs.com”
  8226          -   Southeast Asia Southeast 1 (Singapore)
  8227      -   “oss-ap-southeast-2.aliyuncs.com”
  8228          -   Asia Pacific Southeast 2 (Sydney)
  8229      -   “oss-ap-southeast-3.aliyuncs.com”
  8230          -   Southeast Asia Southeast 3 (Kuala Lumpur)
  8231      -   “oss-ap-southeast-5.aliyuncs.com”
  8232          -   Asia Pacific Southeast 5 (Jakarta)
  8233      -   “oss-ap-northeast-1.aliyuncs.com”
  8234          -   Asia Pacific Northeast 1 (Japan)
  8235      -   “oss-ap-south-1.aliyuncs.com”
  8236          -   Asia Pacific South 1 (Mumbai)
  8237      -   “oss-eu-central-1.aliyuncs.com”
  8238          -   Central Europe 1 (Frankfurt)
  8239      -   “oss-eu-west-1.aliyuncs.com”
  8240          -   West Europe (London)
  8241      -   “oss-me-east-1.aliyuncs.com”
  8242          -   Middle East 1 (Dubai)
  8243  
  8244  –s3-endpoint
  8245  
  8246  Endpoint for S3 API. Required when using an S3 clone.
  8247  
  8248  -   Config: endpoint
  8249  -   Env Var: RCLONE_S3_ENDPOINT
  8250  -   Type: string
  8251  -   Default: ""
  8252  -   Examples:
  8253      -   “objects-us-east-1.dream.io”
  8254          -   Dream Objects endpoint
  8255      -   “nyc3.digitaloceanspaces.com”
  8256          -   Digital Ocean Spaces New York 3
  8257      -   “ams3.digitaloceanspaces.com”
  8258          -   Digital Ocean Spaces Amsterdam 3
  8259      -   “sgp1.digitaloceanspaces.com”
  8260          -   Digital Ocean Spaces Singapore 1
  8261      -   “s3.wasabisys.com”
  8262          -   Wasabi US East endpoint
  8263      -   “s3.us-west-1.wasabisys.com”
  8264          -   Wasabi US West endpoint
  8265      -   “s3.eu-central-1.wasabisys.com”
  8266          -   Wasabi EU Central endpoint
  8267  
  8268  –s3-location-constraint
  8269  
  8270  Location constraint - must be set to match the Region. Used when
  8271  creating buckets only.
  8272  
  8273  -   Config: location_constraint
  8274  -   Env Var: RCLONE_S3_LOCATION_CONSTRAINT
  8275  -   Type: string
  8276  -   Default: ""
  8277  -   Examples:
  8278      -   ""
  8279          -   Empty for US Region, Northern Virginia or Pacific Northwest.
  8280      -   “us-east-2”
  8281          -   US East (Ohio) Region.
  8282      -   “us-west-2”
  8283          -   US West (Oregon) Region.
  8284      -   “us-west-1”
  8285          -   US West (Northern California) Region.
  8286      -   “ca-central-1”
  8287          -   Canada (Central) Region.
  8288      -   “eu-west-1”
  8289          -   EU (Ireland) Region.
  8290      -   “eu-west-2”
  8291          -   EU (London) Region.
  8292      -   “eu-north-1”
  8293          -   EU (Stockholm) Region.
  8294      -   “EU”
  8295          -   EU Region.
  8296      -   “ap-southeast-1”
  8297          -   Asia Pacific (Singapore) Region.
  8298      -   “ap-southeast-2”
  8299          -   Asia Pacific (Sydney) Region.
  8300      -   “ap-northeast-1”
  8301          -   Asia Pacific (Tokyo) Region.
  8302      -   “ap-northeast-2”
  8303          -   Asia Pacific (Seoul)
  8304      -   “ap-south-1”
  8305          -   Asia Pacific (Mumbai)
  8306      -   “sa-east-1”
  8307          -   South America (Sao Paulo) Region.
  8308  
  8309  –s3-location-constraint
  8310  
Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list - hit enter.
  8313  
  8314  -   Config: location_constraint
  8315  -   Env Var: RCLONE_S3_LOCATION_CONSTRAINT
  8316  -   Type: string
  8317  -   Default: ""
  8318  -   Examples:
  8319      -   “us-standard”
  8320          -   US Cross Region Standard
  8321      -   “us-vault”
  8322          -   US Cross Region Vault
  8323      -   “us-cold”
  8324          -   US Cross Region Cold
  8325      -   “us-flex”
  8326          -   US Cross Region Flex
  8327      -   “us-east-standard”
  8328          -   US East Region Standard
  8329      -   “us-east-vault”
  8330          -   US East Region Vault
  8331      -   “us-east-cold”
  8332          -   US East Region Cold
  8333      -   “us-east-flex”
  8334          -   US East Region Flex
  8335      -   “us-south-standard”
  8336          -   US South Region Standard
  8337      -   “us-south-vault”
  8338          -   US South Region Vault
  8339      -   “us-south-cold”
  8340          -   US South Region Cold
  8341      -   “us-south-flex”
  8342          -   US South Region Flex
  8343      -   “eu-standard”
  8344          -   EU Cross Region Standard
  8345      -   “eu-vault”
  8346          -   EU Cross Region Vault
  8347      -   “eu-cold”
  8348          -   EU Cross Region Cold
  8349      -   “eu-flex”
  8350          -   EU Cross Region Flex
  8351      -   “eu-gb-standard”
  8352          -   Great Britain Standard
  8353      -   “eu-gb-vault”
  8354          -   Great Britain Vault
  8355      -   “eu-gb-cold”
  8356          -   Great Britain Cold
  8357      -   “eu-gb-flex”
  8358          -   Great Britain Flex
  8359      -   “ap-standard”
  8360          -   APAC Standard
  8361      -   “ap-vault”
  8362          -   APAC Vault
  8363      -   “ap-cold”
  8364          -   APAC Cold
  8365      -   “ap-flex”
  8366          -   APAC Flex
  8367      -   “mel01-standard”
  8368          -   Melbourne Standard
  8369      -   “mel01-vault”
  8370          -   Melbourne Vault
  8371      -   “mel01-cold”
  8372          -   Melbourne Cold
  8373      -   “mel01-flex”
  8374          -   Melbourne Flex
  8375      -   “tor01-standard”
  8376          -   Toronto Standard
  8377      -   “tor01-vault”
  8378          -   Toronto Vault
  8379      -   “tor01-cold”
  8380          -   Toronto Cold
  8381      -   “tor01-flex”
  8382          -   Toronto Flex
  8383  
  8384  –s3-location-constraint
  8385  
  8386  Location constraint - must be set to match the Region. Leave blank if
  8387  not sure. Used when creating buckets only.
  8388  
  8389  -   Config: location_constraint
  8390  -   Env Var: RCLONE_S3_LOCATION_CONSTRAINT
  8391  -   Type: string
  8392  -   Default: ""
  8393  
  8394  –s3-acl
  8395  
  8396  Canned ACL used when creating buckets and storing or copying objects.
  8397  
  8398  This ACL is used for creating objects and if bucket_acl isn’t set, for
  8399  creating buckets too.
  8400  
  8401  For more info visit
  8402  https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  8403  
  8404  Note that this ACL is applied when server side copying objects as S3
  8405  doesn’t copy the ACL from the source but rather writes a fresh one.
  8406  
  8407  -   Config: acl
  8408  -   Env Var: RCLONE_S3_ACL
  8409  -   Type: string
  8410  -   Default: ""
  8411  -   Examples:
  8412      -   “private”
  8413          -   Owner gets FULL_CONTROL. No one else has access rights
  8414              (default).
  8415      -   “public-read”
  8416          -   Owner gets FULL_CONTROL. The AllUsers group gets READ
  8417              access.
  8418      -   “public-read-write”
  8419          -   Owner gets FULL_CONTROL. The AllUsers group gets READ and
  8420              WRITE access.
  8421          -   Granting this on a bucket is generally not recommended.
  8422      -   “authenticated-read”
  8423          -   Owner gets FULL_CONTROL. The AuthenticatedUsers group gets
  8424              READ access.
  8425      -   “bucket-owner-read”
  8426          -   Object owner gets FULL_CONTROL. Bucket owner gets READ
  8427              access.
  8428          -   If you specify this canned ACL when creating a bucket,
  8429              Amazon S3 ignores it.
  8430      -   “bucket-owner-full-control”
  8431          -   Both the object owner and the bucket owner get FULL_CONTROL
  8432              over the object.
  8433          -   If you specify this canned ACL when creating a bucket,
  8434              Amazon S3 ignores it.
  8435      -   “private”
  8436          -   Owner gets FULL_CONTROL. No one else has access rights
  8437              (default). This acl is available on IBM Cloud (Infra), IBM
  8438              Cloud (Storage), On-Premise COS
  8439      -   “public-read”
  8440          -   Owner gets FULL_CONTROL. The AllUsers group gets READ
  8441              access. This acl is available on IBM Cloud (Infra), IBM
  8442              Cloud (Storage), On-Premise IBM COS
  8443      -   “public-read-write”
  8444          -   Owner gets FULL_CONTROL. The AllUsers group gets READ and
  8445              WRITE access. This acl is available on IBM Cloud (Infra),
  8446              On-Premise IBM COS
  8447      -   “authenticated-read”
  8448          -   Owner gets FULL_CONTROL. The AuthenticatedUsers group gets
  8449              READ access. Not supported on Buckets. This acl is available
  8450              on IBM Cloud (Infra) and On-Premise IBM COS
  8451  
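Each of these S3 options can also be supplied as a command line flag or
environment variable. For example, to upload objects with a public-read
ACL (the remote and bucket names here are placeholders):

    rclone copy --s3-acl public-read /path/to/files remote:bucket
    RCLONE_S3_ACL=public-read rclone copy /path/to/files remote:bucket
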
  8452  –s3-server-side-encryption
  8453  
  8454  The server-side encryption algorithm used when storing this object in
  8455  S3.
  8456  
  8457  -   Config: server_side_encryption
  8458  -   Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
  8459  -   Type: string
  8460  -   Default: ""
  8461  -   Examples:
  8462      -   ""
  8463          -   None
  8464      -   “AES256”
  8465          -   AES256
  8466      -   “aws:kms”
  8467          -   aws:kms
  8468  
  8469  –s3-sse-kms-key-id
  8470  
If using KMS ID you must provide the ARN of the Key.
  8472  
  8473  -   Config: sse_kms_key_id
  8474  -   Env Var: RCLONE_S3_SSE_KMS_KEY_ID
  8475  -   Type: string
  8476  -   Default: ""
  8477  -   Examples:
  8478      -   ""
  8479          -   None
  8480      -   "arn:aws:kms:us-east-1:*"
  8481          -   arn:aws:kms:*
  8482  
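As a sketch, uploading with SSE-KMS might look like this (the key ARN is
a placeholder for your own key):

    rclone copy /path/to/files remote:bucket \
        --s3-server-side-encryption aws:kms \
        --s3-sse-kms-key-id arn:aws:kms:us-east-1:123456789012:key/my-key-id
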
  8483  –s3-storage-class
  8484  
  8485  The storage class to use when storing new objects in S3.
  8486  
  8487  -   Config: storage_class
  8488  -   Env Var: RCLONE_S3_STORAGE_CLASS
  8489  -   Type: string
  8490  -   Default: ""
  8491  -   Examples:
  8492      -   ""
  8493          -   Default
  8494      -   “STANDARD”
  8495          -   Standard storage class
  8496      -   “REDUCED_REDUNDANCY”
  8497          -   Reduced redundancy storage class
  8498      -   “STANDARD_IA”
  8499          -   Standard Infrequent Access storage class
  8500      -   “ONEZONE_IA”
  8501          -   One Zone Infrequent Access storage class
  8502      -   “GLACIER”
  8503          -   Glacier storage class
  8504      -   “DEEP_ARCHIVE”
  8505          -   Glacier Deep Archive storage class
  8506  
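For example, to upload new objects in the infrequent access storage
class (the remote and bucket names are placeholders):

    rclone copy --s3-storage-class STANDARD_IA /path/to/files remote:bucket
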
  8507  –s3-storage-class
  8508  
  8509  The storage class to use when storing new objects in OSS.
  8510  
  8511  -   Config: storage_class
  8512  -   Env Var: RCLONE_S3_STORAGE_CLASS
  8513  -   Type: string
  8514  -   Default: ""
  8515  -   Examples:
  8516      -   ""
  8517          -   Default
  8518      -   “STANDARD”
  8519          -   Standard storage class
  8520      -   “GLACIER”
  8521          -   Archive storage mode.
  8522      -   “STANDARD_IA”
  8523          -   Infrequent access storage mode.
  8524  
  8525  Advanced Options
  8526  
  8527  Here are the advanced options specific to s3 (Amazon S3 Compliant
  8528  Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS,
  8529  Minio, etc)).
  8530  
  8531  –s3-bucket-acl
  8532  
  8533  Canned ACL used when creating buckets.
  8534  
  8535  For more info visit
  8536  https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  8537  
Note that this ACL is applied only when creating buckets. If it isn’t
set then “acl” is used instead.
  8540  
  8541  -   Config: bucket_acl
  8542  -   Env Var: RCLONE_S3_BUCKET_ACL
  8543  -   Type: string
  8544  -   Default: ""
  8545  -   Examples:
  8546      -   “private”
  8547          -   Owner gets FULL_CONTROL. No one else has access rights
  8548              (default).
  8549      -   “public-read”
  8550          -   Owner gets FULL_CONTROL. The AllUsers group gets READ
  8551              access.
  8552      -   “public-read-write”
  8553          -   Owner gets FULL_CONTROL. The AllUsers group gets READ and
  8554              WRITE access.
  8555          -   Granting this on a bucket is generally not recommended.
  8556      -   “authenticated-read”
  8557          -   Owner gets FULL_CONTROL. The AuthenticatedUsers group gets
  8558              READ access.
  8559  
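For example, to create a bucket with a public-read bucket ACL while
leaving the object acl at its default (names are placeholders):

    rclone mkdir --s3-bucket-acl public-read remote:public-bucket
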
  8560  –s3-upload-cutoff
  8561  
  8562  Cutoff for switching to chunked upload
  8563  
  8564  Any files larger than this will be uploaded in chunks of chunk_size. The
  8565  minimum is 0 and the maximum is 5GB.
  8566  
  8567  -   Config: upload_cutoff
  8568  -   Env Var: RCLONE_S3_UPLOAD_CUTOFF
  8569  -   Type: SizeSuffix
  8570  -   Default: 200M
  8571  
  8572  –s3-chunk-size
  8573  
  8574  Chunk size to use for uploading.
  8575  
  8576  When uploading files larger than upload_cutoff they will be uploaded as
  8577  multipart uploads using this chunk size.
  8578  
  8579  Note that “–s3-upload-concurrency” chunks of this size are buffered in
  8580  memory per transfer.
  8581  
  8582  If you are transferring large files over high speed links and you have
  8583  enough memory, then increasing this will speed up the transfers.
  8584  
  8585  -   Config: chunk_size
  8586  -   Env Var: RCLONE_S3_CHUNK_SIZE
  8587  -   Type: SizeSuffix
  8588  -   Default: 5M
  8589  
  8590  –s3-disable-checksum
  8591  
  8592  Don’t store MD5 checksum with object metadata
  8593  
  8594  -   Config: disable_checksum
  8595  -   Env Var: RCLONE_S3_DISABLE_CHECKSUM
  8596  -   Type: bool
  8597  -   Default: false
  8598  
  8599  –s3-session-token
  8600  
  8601  An AWS session token
  8602  
  8603  -   Config: session_token
  8604  -   Env Var: RCLONE_S3_SESSION_TOKEN
  8605  -   Type: string
  8606  -   Default: ""
  8607  
  8608  –s3-upload-concurrency
  8609  
  8610  Concurrency for multipart uploads.
  8611  
  8612  This is the number of chunks of the same file that are uploaded
  8613  concurrently.
  8614  
If you are uploading small numbers of large files over a high speed link
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
  8618  
  8619  -   Config: upload_concurrency
  8620  -   Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
  8621  -   Type: int
  8622  -   Default: 4
  8623  
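Since roughly chunk_size × upload_concurrency bytes may be buffered per
transfer, the two options are usually tuned together. A sketch for fast
links with plenty of memory (the values are illustrative only):

    rclone copy --s3-chunk-size 64M --s3-upload-concurrency 8 /path/to/bigfiles remote:bucket

With these values each transfer may buffer up to about 64M × 8 = 512M.
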
  8624  –s3-force-path-style
  8625  
If true use path style access; if false use virtual hosted style.
  8627  
If this is true (the default) then rclone will use path style access; if
false then rclone will use virtual hosted style. See the AWS S3 docs for
  8630  more info.
  8631  
  8632  Some providers (eg Aliyun OSS or Netease COS) require this set to false.
  8633  
  8634  -   Config: force_path_style
  8635  -   Env Var: RCLONE_S3_FORCE_PATH_STYLE
  8636  -   Type: bool
  8637  -   Default: true
  8638  
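For example, to force virtual hosted style access from the command line
for a provider that needs it:

    rclone lsd --s3-force-path-style=false remote:
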
  8639  –s3-v2-auth
  8640  
  8641  If true use v2 authentication.
  8642  
  8643  If this is false (the default) then rclone will use v4 authentication.
  8644  If it is set then rclone will use v2 authentication.
  8645  
  8646  Use this only if v4 signatures don’t work, eg pre Jewel/v10 CEPH.
  8647  
  8648  -   Config: v2_auth
  8649  -   Env Var: RCLONE_S3_V2_AUTH
  8650  -   Type: bool
  8651  -   Default: false
  8652  
  8653  –s3-use-accelerate-endpoint
  8654  
  8655  If true use the AWS S3 accelerated endpoint.
  8656  
  8657  See: AWS S3 Transfer acceleration
  8658  
  8659  -   Config: use_accelerate_endpoint
  8660  -   Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
  8661  -   Type: bool
  8662  -   Default: false
  8663  
  8664  Anonymous access to public buckets
  8665  
  8666  If you want to use rclone to access a public bucket, configure with a
  8667  blank access_key_id and secret_access_key. Your config should end up
  8668  looking like this:
  8669  
  8670      [anons3]
  8671      type = s3
  8672      provider = AWS
  8673      env_auth = false
  8674      access_key_id = 
  8675      secret_access_key = 
  8676      region = us-east-1
  8677      endpoint = 
  8678      location_constraint = 
  8679      acl = private
  8680      server_side_encryption = 
  8681      storage_class = 
  8682  
  8683  Then use it as normal with the name of the public bucket, eg
  8684  
  8685      rclone lsd anons3:1000genomes
  8686  
  8687  You will be able to list and copy data but not upload it.
  8688  
  8689  Ceph
  8690  
  8691  Ceph is an open source unified, distributed storage system designed for
  8692  excellent performance, reliability and scalability. It has an S3
  8693  compatible object storage interface.
  8694  
  8695  To use rclone with Ceph, configure as above but leave the region blank
  8696  and set the endpoint. You should end up with something like this in your
  8697  config:
  8698  
  8699      [ceph]
  8700      type = s3
  8701      provider = Ceph
  8702      env_auth = false
  8703      access_key_id = XXX
  8704      secret_access_key = YYY
  8705      region =
  8706      endpoint = https://ceph.endpoint.example.com
  8707      location_constraint =
  8708      acl =
  8709      server_side_encryption =
  8710      storage_class =
  8711  
  8712  If you are using an older version of CEPH, eg 10.2.x Jewel, then you may
  8713  need to supply the parameter --s3-upload-cutoff 0 or put this in the
  8714  config file as upload_cutoff 0 to work around a bug which causes
  8715  uploading of small files to fail.
  8716  
  8717  Note also that Ceph sometimes puts / in the passwords it gives users. If
  8718  you read the secret access key using the command line tools you will get
  8719  a JSON blob with the / escaped as \/. Make sure you only write / in the
  8720  secret access key.
  8721  
  8722  Eg the dump from Ceph looks something like this (irrelevant keys
  8723  removed).
  8724  
  8725      {
  8726          "user_id": "xxx",
  8727          "display_name": "xxxx",
  8728          "keys": [
  8729              {
  8730                  "user": "xxx",
  8731                  "access_key": "xxxxxx",
  8732                  "secret_key": "xxxxxx\/xxxx"
  8733              }
  8734          ],
  8735      }
  8736  
  8737  Because this is a json dump, it is encoding the / as \/, so if you use
  8738  the secret key as xxxxxx/xxxx it will work fine.
  8739  
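In other words, the corresponding line in your rclone config should
contain the unescaped form:

    secret_access_key = xxxxxx/xxxx
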
  8740  Dreamhost
  8741  
  8742  Dreamhost DreamObjects is an object storage system based on CEPH.
  8743  
  8744  To use rclone with Dreamhost, configure as above but leave the region
  8745  blank and set the endpoint. You should end up with something like this
  8746  in your config:
  8747  
  8748      [dreamobjects]
  8749      type = s3
  8750      provider = DreamHost
  8751      env_auth = false
  8752      access_key_id = your_access_key
  8753      secret_access_key = your_secret_key
  8754      region =
  8755      endpoint = objects-us-west-1.dream.io
  8756      location_constraint =
  8757      acl = private
  8758      server_side_encryption =
  8759      storage_class =
  8760  
  8761  DigitalOcean Spaces
  8762  
  8763  Spaces is an S3-interoperable object storage service from cloud provider
  8764  DigitalOcean.
  8765  
  8766  To connect to DigitalOcean Spaces you will need an access key and secret
  8767  key. These can be retrieved on the “Applications & API” page of the
DigitalOcean control panel. They will be needed when prompted by
  8769  rclone config for your access_key_id and secret_access_key.
  8770  
  8771  When prompted for a region or location_constraint, press enter to use
  8772  the default value. The region must be included in the endpoint setting
  8773  (e.g. nyc3.digitaloceanspaces.com). The default values can be used for
  8774  other settings.
  8775  
  8776  Going through the whole process of creating a new remote by running
  8777  rclone config, each prompt should be answered as shown below:
  8778  
  8779      Storage> s3
  8780      env_auth> 1
  8781      access_key_id> YOUR_ACCESS_KEY
  8782      secret_access_key> YOUR_SECRET_KEY
  8783      region>
  8784      endpoint> nyc3.digitaloceanspaces.com
  8785      location_constraint>
  8786      acl>
  8787      storage_class>
  8788  
  8789  The resulting configuration file should look like:
  8790  
  8791      [spaces]
  8792      type = s3
  8793      provider = DigitalOcean
  8794      env_auth = false
  8795      access_key_id = YOUR_ACCESS_KEY
  8796      secret_access_key = YOUR_SECRET_KEY
  8797      region =
  8798      endpoint = nyc3.digitaloceanspaces.com
  8799      location_constraint =
  8800      acl =
  8801      server_side_encryption =
  8802      storage_class =
  8803  
  8804  Once configured, you can create a new Space and begin copying files. For
  8805  example:
  8806  
  8807      rclone mkdir spaces:my-new-space
  8808      rclone copy /path/to/files spaces:my-new-space
  8809  
  8810  IBM COS (S3)
  8811  
  8812  Information stored with IBM Cloud Object Storage is encrypted and
  8813  dispersed across multiple geographic locations, and accessed through an
  8814  implementation of the S3 API. This service makes use of the distributed
  8815  storage technologies provided by IBM’s Cloud Object Storage System
  8816  (formerly Cleversafe). For more information visit:
  8817  (http://www.ibm.com/cloud/object-storage)
  8818  
  8819  To configure access to IBM COS S3, follow the steps below:
  8820  
  8821  1.  Run rclone config and select n for a new remote.
  8822  
  8823          2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
  8824          No remotes found - make a new one
  8825          n) New remote
  8826          s) Set configuration password
  8827          q) Quit config
  8828          n/s/q> n
  8829  
  8830  2.  Enter the name for the configuration
  8831  
  8832          name> <YOUR NAME>
  8833  
  8834  3.  Select “s3” storage.
  8835  
  8836      Choose a number from below, or type in your own value
  8837          1 / Alias for an existing remote
  8838          \ "alias"
  8839          2 / Amazon Drive
  8840          \ "amazon cloud drive"
  8841          3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
  8842          \ "s3"
  8843          4 / Backblaze B2
  8844          \ "b2"
  8845      [snip]
  8846          23 / http Connection
  8847          \ "http"
  8848      Storage> 3
  8849  
  8850  4.  Select IBM COS as the S3 Storage Provider.
  8851  
  8852      Choose the S3 provider.
  8853      Choose a number from below, or type in your own value
  8854           1 / Choose this option to configure Storage to AWS S3
  8855             \ "AWS"
  8856           2 / Choose this option to configure Storage to Ceph Systems
  8857           \ "Ceph"
  8858           3 /  Choose this option to configure Storage to Dreamhost
  8859           \ "Dreamhost"
  8860         4 / Choose this option to the configure Storage to IBM COS S3
  8861           \ "IBMCOS"
  8862           5 / Choose this option to the configure Storage to Minio
  8863           \ "Minio"
  8864           Provider>4
  8865  
  8866  5.  Enter the Access Key and Secret.
  8867  
  8868          AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  8869          access_key_id> <>
  8870          AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  8871          secret_access_key> <>
  8872  
6.  Specify the endpoint for IBM COS. For Public IBM COS, choose from
    the options below. For On Premise IBM COS, enter an endpoint address.
  8875  
  8876          Endpoint for IBM COS S3 API.
  8877          Specify if using an IBM COS On Premise.
  8878          Choose a number from below, or type in your own value
  8879           1 / US Cross Region Endpoint
  8880             \ "s3-api.us-geo.objectstorage.softlayer.net"
  8881           2 / US Cross Region Dallas Endpoint
  8882             \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
  8883           3 / US Cross Region Washington DC Endpoint
  8884             \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
  8885           4 / US Cross Region San Jose Endpoint
  8886             \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
  8887           5 / US Cross Region Private Endpoint
  8888             \ "s3-api.us-geo.objectstorage.service.networklayer.com"
  8889           6 / US Cross Region Dallas Private Endpoint
  8890             \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
  8891           7 / US Cross Region Washington DC Private Endpoint
  8892             \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
  8893           8 / US Cross Region San Jose Private Endpoint
  8894             \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
  8895           9 / US Region East Endpoint
  8896             \ "s3.us-east.objectstorage.softlayer.net"
  8897          10 / US Region East Private Endpoint
  8898             \ "s3.us-east.objectstorage.service.networklayer.com"
  8899          11 / US Region South Endpoint
  8900      [snip]
  8901          34 / Toronto Single Site Private Endpoint
  8902             \ "s3.tor01.objectstorage.service.networklayer.com"
  8903          endpoint>1
  8904  
7.  Specify an IBM COS Location Constraint. The location constraint must
    match the endpoint when using IBM Cloud Public. For on-prem COS, do
    not make a selection from this list; just hit enter.
  8908  
  8909           1 / US Cross Region Standard
  8910             \ "us-standard"
  8911           2 / US Cross Region Vault
  8912             \ "us-vault"
  8913           3 / US Cross Region Cold
  8914             \ "us-cold"
  8915           4 / US Cross Region Flex
  8916             \ "us-flex"
  8917           5 / US East Region Standard
  8918             \ "us-east-standard"
  8919           6 / US East Region Vault
  8920             \ "us-east-vault"
  8921           7 / US East Region Cold
  8922             \ "us-east-cold"
  8923           8 / US East Region Flex
  8924             \ "us-east-flex"
  8925           9 / US South Region Standard
  8926             \ "us-south-standard"
  8927          10 / US South Region Vault
  8928             \ "us-south-vault"
  8929      [snip]
  8930          32 / Toronto Flex
  8931             \ "tor01-flex"
  8932      location_constraint>1
  8933  
9.  Specify a canned ACL. IBM Cloud (Storage) supports “public-read”
    and “private”. IBM Cloud (Infra) supports all the canned ACLs.
    On-Premise COS supports all the canned ACLs.
  8937  
  8938      Canned ACL used when creating buckets and/or storing objects in S3.
  8939      For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  8940      Choose a number from below, or type in your own value
  8941            1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
  8942            \ "private"
  8943            2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
  8944            \ "public-read"
  8945            3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
  8946            \ "public-read-write"
  8947            4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
  8948            \ "authenticated-read"
  8949      acl> 1
  8950  
  8951  12. Review the displayed configuration and accept to save the “remote”
  8952      then quit. The config file should look like this
  8953  
  8954          [xxx]
  8955          type = s3
  8956          Provider = IBMCOS
  8957          access_key_id = xxx
  8958          secret_access_key = yyy
  8959          endpoint = s3-api.us-geo.objectstorage.softlayer.net
  8960          location_constraint = us-standard
  8961          acl = private
  8962  
  8963  13. Execute rclone commands
  8964  
  8965          1)  Create a bucket.
  8966              rclone mkdir IBM-COS-XREGION:newbucket
  8967          2)  List available buckets.
  8968              rclone lsd IBM-COS-XREGION:
  8969              -1 2017-11-08 21:16:22        -1 test
  8970              -1 2018-02-14 20:16:39        -1 newbucket
  8971          3)  List contents of a bucket.
  8972              rclone ls IBM-COS-XREGION:newbucket
  8973              18685952 test.exe
  8974          4)  Copy a file from local to remote.
  8975              rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
  8976          5)  Copy a file from remote to local.
  8977              rclone copy IBM-COS-XREGION:newbucket/file.txt .
  8978          6)  Delete a file on remote.
  8979              rclone delete IBM-COS-XREGION:newbucket/file.txt
  8980  
  8981  Minio
  8982  
  8983  Minio is an object storage server built for cloud application developers
  8984  and devops.
  8985  
  8986  It is very easy to install and provides an S3 compatible server which
  8987  can be used by rclone.
  8988  
  8989  To use it, install Minio following the instructions here.
  8990  
  8991  When it configures itself Minio will print something like this
  8992  
  8993      Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
  8994      AccessKey: USWUXHGYZQYFYFFIT3RE
  8995      SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  8996      Region:    us-east-1
  8997      SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
  8998  
  8999      Browser Access:
  9000         http://192.168.1.106:9000  http://172.23.0.1:9000
  9001  
  9002      Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
  9003         $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  9004  
  9005      Object API (Amazon S3 compatible):
  9006         Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
  9007         Java:       https://docs.minio.io/docs/java-client-quickstart-guide
  9008         Python:     https://docs.minio.io/docs/python-client-quickstart-guide
  9009         JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
  9010         .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide
  9011  
  9012      Drive Capacity: 26 GiB Free, 165 GiB Total
  9013  
  9014  These details need to go into rclone config like this. Note that it is
  9015  important to put the region in as stated above.
  9016  
  9017      env_auth> 1
  9018      access_key_id> USWUXHGYZQYFYFFIT3RE
  9019      secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  9020      region> us-east-1
  9021      endpoint> http://192.168.1.106:9000
  9022      location_constraint>
  9023      server_side_encryption>
  9024  
  9025  Which makes the config file look like this
  9026  
  9027      [minio]
  9028      type = s3
  9029      provider = Minio
  9030      env_auth = false
  9031      access_key_id = USWUXHGYZQYFYFFIT3RE
  9032      secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
  9033      region = us-east-1
  9034      endpoint = http://192.168.1.106:9000
  9035      location_constraint =
  9036      server_side_encryption =
  9037  
  9038  So once set up, for example to copy files into a bucket
  9039  
  9040      rclone copy /path/to/files minio:bucket
  9041  
  9042  Scaleway
  9043  
Scaleway’s Object Storage platform allows you to store anything from
backups, logs and web assets to documents and photos. Files can be
uploaded through the Scaleway console, the Scaleway API and CLI, or any
S3-compatible tool.
  9048  
  9049  Scaleway provides an S3 interface which can be configured for use with
  9050  rclone like this:
  9051  
  9052      [scaleway]
  9053      type = s3
  9054      env_auth = false
  9055      endpoint = s3.nl-ams.scw.cloud
  9056      access_key_id = SCWXXXXXXXXXXXXXX
  9057      secret_access_key = 1111111-2222-3333-44444-55555555555555
  9058      region = nl-ams
  9059      location_constraint =
  9060      acl = private
  9061      force_path_style = false
  9062      server_side_encryption =
  9063      storage_class =
  9064  
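Once defined, the remote can be used like any other S3 remote, eg (the
bucket name is a placeholder):

    rclone lsd scaleway:
    rclone copy /path/to/files scaleway:my-bucket
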
  9065  Wasabi
  9066  
  9067  Wasabi is a cloud-based object storage service for a broad range of
  9068  applications and use cases. Wasabi is designed for individuals and
  9069  organizations that require a high-performance, reliable, and secure data
  9070  storage infrastructure at minimal cost.
  9071  
  9072  Wasabi provides an S3 interface which can be configured for use with
  9073  rclone like this.
  9074  
  9075      No remotes found - make a new one
  9076      n) New remote
  9077      s) Set configuration password
  9078      n/s> n
  9079      name> wasabi
  9080      Type of storage to configure.
  9081      Choose a number from below, or type in your own value
  9082       1 / Amazon Drive
  9083         \ "amazon cloud drive"
  9084       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
  9085         \ "s3"
  9086      [snip]
  9087      Storage> s3
  9088      Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  9089      Choose a number from below, or type in your own value
  9090       1 / Enter AWS credentials in the next step
  9091         \ "false"
  9092       2 / Get AWS credentials from the environment (env vars or IAM)
  9093         \ "true"
  9094      env_auth> 1
  9095      AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  9096      access_key_id> YOURACCESSKEY
  9097      AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  9098      secret_access_key> YOURSECRETACCESSKEY
  9099      Region to connect to.
  9100      Choose a number from below, or type in your own value
  9101         / The default endpoint - a good choice if you are unsure.
  9102       1 | US Region, Northern Virginia or Pacific Northwest.
  9103         | Leave location constraint empty.
  9104         \ "us-east-1"
  9105      [snip]
  9106      region> us-east-1
  9107      Endpoint for S3 API.
  9108      Leave blank if using AWS to use the default endpoint for the region.
  9109      Specify if using an S3 clone such as Ceph.
  9110      endpoint> s3.wasabisys.com
  9111      Location constraint - must be set to match the Region. Used when creating buckets only.
  9112      Choose a number from below, or type in your own value
  9113       1 / Empty for US Region, Northern Virginia or Pacific Northwest.
  9114         \ ""
  9115      [snip]
  9116      location_constraint>
  9117      Canned ACL used when creating buckets and/or storing objects in S3.
  9118      For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  9119      Choose a number from below, or type in your own value
  9120       1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  9121         \ "private"
  9122      [snip]
  9123      acl>
  9124      The server-side encryption algorithm used when storing this object in S3.
  9125      Choose a number from below, or type in your own value
  9126       1 / None
  9127         \ ""
  9128       2 / AES256
  9129         \ "AES256"
  9130      server_side_encryption>
  9131      The storage class to use when storing objects in S3.
  9132      Choose a number from below, or type in your own value
  9133       1 / Default
  9134         \ ""
  9135       2 / Standard storage class
  9136         \ "STANDARD"
  9137       3 / Reduced redundancy storage class
  9138         \ "REDUCED_REDUNDANCY"
  9139       4 / Standard Infrequent Access storage class
  9140         \ "STANDARD_IA"
  9141      storage_class>
  9142      Remote config
  9143      --------------------
  9144      [wasabi]
  9145      env_auth = false
  9146      access_key_id = YOURACCESSKEY
  9147      secret_access_key = YOURSECRETACCESSKEY
  9148      region = us-east-1
  9149      endpoint = s3.wasabisys.com
  9150      location_constraint =
  9151      acl =
  9152      server_side_encryption =
  9153      storage_class =
  9154      --------------------
  9155      y) Yes this is OK
  9156      e) Edit this remote
  9157      d) Delete this remote
  9158      y/e/d> y
  9159  
  9160  This will leave the config file looking like this.
  9161  
  9162      [wasabi]
  9163      type = s3
  9164      provider = Wasabi
  9165      env_auth = false
  9166      access_key_id = YOURACCESSKEY
  9167      secret_access_key = YOURSECRETACCESSKEY
  9168      region =
  9169      endpoint = s3.wasabisys.com
  9170      location_constraint =
  9171      acl =
  9172      server_side_encryption =
  9173      storage_class =
  9174  
  9175  Alibaba OSS
  9176  
  9177  Here is an example of making an Alibaba Cloud (Aliyun) OSS
  9178  configuration. First run:
  9179  
  9180      rclone config
  9181  
  9182  This will guide you through an interactive setup process.
  9183  
  9184      No remotes found - make a new one
  9185      n) New remote
  9186      s) Set configuration password
  9187      q) Quit config
  9188      n/s/q> n
  9189      name> oss
  9190      Type of storage to configure.
  9191      Enter a string value. Press Enter for the default ("").
  9192      Choose a number from below, or type in your own value
  9193      [snip]
  9194       4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
  9195         \ "s3"
  9196      [snip]
  9197      Storage> s3
  9198      Choose your S3 provider.
  9199      Enter a string value. Press Enter for the default ("").
  9200      Choose a number from below, or type in your own value
  9201       1 / Amazon Web Services (AWS) S3
  9202         \ "AWS"
  9203       2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
  9204         \ "Alibaba"
  9205       3 / Ceph Object Storage
  9206         \ "Ceph"
  9207      [snip]
  9208      provider> Alibaba
  9209      Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
  9210      Only applies if access_key_id and secret_access_key is blank.
  9211      Enter a boolean value (true or false). Press Enter for the default ("false").
  9212      Choose a number from below, or type in your own value
  9213       1 / Enter AWS credentials in the next step
  9214         \ "false"
  9215       2 / Get AWS credentials from the environment (env vars or IAM)
  9216         \ "true"
  9217      env_auth> 1
  9218      AWS Access Key ID.
  9219      Leave blank for anonymous access or runtime credentials.
  9220      Enter a string value. Press Enter for the default ("").
  9221      access_key_id> accesskeyid
  9222      AWS Secret Access Key (password)
  9223      Leave blank for anonymous access or runtime credentials.
  9224      Enter a string value. Press Enter for the default ("").
  9225      secret_access_key> secretaccesskey
  9226      Endpoint for OSS API.
  9227      Enter a string value. Press Enter for the default ("").
  9228      Choose a number from below, or type in your own value
  9229       1 / East China 1 (Hangzhou)
  9230         \ "oss-cn-hangzhou.aliyuncs.com"
  9231       2 / East China 2 (Shanghai)
  9232         \ "oss-cn-shanghai.aliyuncs.com"
  9233       3 / North China 1 (Qingdao)
  9234         \ "oss-cn-qingdao.aliyuncs.com"
  9235      [snip]
  9236      endpoint> 1
  9237      Canned ACL used when creating buckets and storing or copying objects.
  9238  
  9239      Note that this ACL is applied when server side copying objects as S3
  9240      doesn't copy the ACL from the source but rather writes a fresh one.
  9241      Enter a string value. Press Enter for the default ("").
  9242      Choose a number from below, or type in your own value
  9243       1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  9244         \ "private"
  9245       2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  9246         \ "public-read"
  9247         / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  9248      [snip]
  9249      acl> 1
  9250      The storage class to use when storing new objects in OSS.
  9251      Enter a string value. Press Enter for the default ("").
  9252      Choose a number from below, or type in your own value
  9253       1 / Default
  9254         \ ""
  9255       2 / Standard storage class
  9256         \ "STANDARD"
  9257       3 / Archive storage mode.
  9258         \ "GLACIER"
  9259       4 / Infrequent access storage mode.
  9260         \ "STANDARD_IA"
  9261      storage_class> 1
  9262      Edit advanced config? (y/n)
  9263      y) Yes
  9264      n) No
  9265      y/n> n
  9266      Remote config
  9267      --------------------
  9268      [oss]
  9269      type = s3
  9270      provider = Alibaba
  9271      env_auth = false
  9272      access_key_id = accesskeyid
  9273      secret_access_key = secretaccesskey
  9274      endpoint = oss-cn-hangzhou.aliyuncs.com
  9275      acl = private
  9276      storage_class = Standard
  9277      --------------------
  9278      y) Yes this is OK
  9279      e) Edit this remote
  9280      d) Delete this remote
  9281      y/e/d> y
  9282  
  9283  Netease NOS
  9284  
For Netease NOS, configure as normal using the rclone config
configurator, setting the provider to Netease. This will automatically
set force_path_style = false which is necessary for it to run properly.
  9288  
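A minimal config sketch for such a remote might look like this (the
endpoint and credentials are placeholders, not real Netease values):

    [nos]
    type = s3
    provider = Netease
    env_auth = false
    access_key_id = your_access_key
    secret_access_key = your_secret_key
    endpoint = your-nos-endpoint.example.com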
  9289  
  9290  Backblaze B2
  9291  
  9292  B2 is Backblaze’s cloud storage system.
  9293  
  9294  Paths are specified as remote:bucket (or remote: for the lsd command.)
  9295  You may put subdirectories in too, eg remote:bucket/path/to/dir.
  9296  
  9297  Here is an example of making a b2 configuration. First run
  9298  
  9299      rclone config
  9300  
  9301  This will guide you through an interactive setup process. To
  9302  authenticate you will either need your Account ID (a short hex number)
  9303  and Master Application Key (a long hex number) OR an Application Key,
  9304  which is the recommended method. See below for further details on
  9305  generating and using an Application Key.
  9306  
  9307      No remotes found - make a new one
  9308      n) New remote
  9309      q) Quit config
  9310      n/q> n
  9311      name> remote
  9312      Type of storage to configure.
  9313      Choose a number from below, or type in your own value
  9314       1 / Amazon Drive
  9315         \ "amazon cloud drive"
  9316       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
  9317         \ "s3"
  9318       3 / Backblaze B2
  9319         \ "b2"
  9320       4 / Dropbox
  9321         \ "dropbox"
  9322       5 / Encrypt/Decrypt a remote
  9323         \ "crypt"
  9324       6 / Google Cloud Storage (this is not Google Drive)
  9325         \ "google cloud storage"
  9326       7 / Google Drive
  9327         \ "drive"
  9328       8 / Hubic
  9329         \ "hubic"
  9330       9 / Local Disk
  9331         \ "local"
  9332      10 / Microsoft OneDrive
  9333         \ "onedrive"
  9334      11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  9335         \ "swift"
  9336      12 / SSH/SFTP Connection
  9337         \ "sftp"
  9338      13 / Yandex Disk
  9339         \ "yandex"
  9340      Storage> 3
  9341      Account ID or Application Key ID
  9342      account> 123456789abc
  9343      Application Key
  9344      key> 0123456789abcdef0123456789abcdef0123456789
  9345      Endpoint for the service - leave blank normally.
  9346      endpoint>
  9347      Remote config
  9348      --------------------
  9349      [remote]
  9350      account = 123456789abc
  9351      key = 0123456789abcdef0123456789abcdef0123456789
  9352      endpoint =
  9353      --------------------
  9354      y) Yes this is OK
  9355      e) Edit this remote
  9356      d) Delete this remote
  9357      y/e/d> y
  9358  
  9359  This remote is called remote and can now be used like this
  9360  
  9361  See all buckets
  9362  
  9363      rclone lsd remote:
  9364  
  9365  Create a new bucket
  9366  
  9367      rclone mkdir remote:bucket
  9368  
  9369  List the contents of a bucket
  9370  
  9371      rclone ls remote:bucket
  9372  
  9373  Sync /home/local/directory to the remote bucket, deleting any excess
  9374  files in the bucket.
  9375  
  9376      rclone sync /home/local/directory remote:bucket
  9377  
  9378  Application Keys
  9379  
  9380  B2 supports multiple Application Keys for different access permission to
  9381  B2 Buckets.
  9382  
  9383  You can use these with rclone too; you will need to use rclone version
  9384  1.43 or later.
  9385  
  9386  Follow Backblaze’s docs to create an Application Key with the required
  9387  permission and add the applicationKeyId as the account and the
  9388  Application Key itself as the key.
  9389  
  9390  Note that you must put the _applicationKeyId_ as the account – you can’t
  9391  use the master Account ID. If you try then B2 will return 401 errors.
  9392  
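A config using an Application Key might therefore look like this sketch
(both values are placeholders):

    [remote]
    type = b2
    account = your_application_key_id
    key = your_application_key
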
  9393  –fast-list
  9394  
  9395  This remote supports --fast-list which allows you to use fewer
  9396  transactions in exchange for more memory. See the rclone docs for more
  9397  details.
  9398  
  9399  Modified time
  9400  
  9401  The modified time is stored as metadata on the object as
  9402  X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in
  9403  the Backblaze standard. Other tools should be able to use this as a
  9404  modified time.
  9405  
  9406  Modified times are used in syncing and are fully supported. Note that if
  9407  a modification time needs to be updated on an object then it will create
  9408  a new version of the object.
  9409  
  9410  SHA1 checksums
  9411  
  9412  The SHA1 checksums of the files are checked on upload and download and
  9413  will be used in the syncing process.
  9414  
  9415  Large files (bigger than the limit in --b2-upload-cutoff) which are
  9416  uploaded in chunks will store their SHA1 on the object as
  9417  X-Bz-Info-large_file_sha1 as recommended by Backblaze.
  9418  
  9419  For a large file to be uploaded with an SHA1 checksum, the source needs
  9420  to support SHA1 checksums. The local disk supports SHA1 checksums so
  9421  large file transfers from local disk will have an SHA1. See the overview
  9422  for exactly which remotes support SHA1.
  9423  
Sources which don’t support SHA1, in particular crypt, will upload large
files without SHA1 checksums. This may be fixed in the future (see
  9426  #1767).
  9427  
Files below --b2-upload-cutoff will always have an SHA1 regardless
  9429  of the source.
  9430  
  9431  Transfers
  9432  
  9433  Backblaze recommends that you do lots of transfers simultaneously for
  9434  maximum speed. In tests from my SSD equipped laptop the optimum setting
  9435  is about --transfers 32 though higher numbers may be used for a slight
  9436  speed improvement. The optimum number for you may vary depending on your
  9437  hardware, how big the files are, how much you want to load your
  9438  computer, etc. The default of --transfers 4 is definitely too low for
  9439  Backblaze B2 though.
  9440  
  9441  Note that uploading big files (bigger than 200 MB by default) will use a
  9442  96 MB RAM buffer by default. There can be at most --transfers of these
  9443  in use at any moment, so this sets the upper limit on the memory used.
  9444  
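For example, to raise the parallelism for a large upload (the bucket
name is a placeholder):

    rclone copy --transfers 32 /path/to/files remote:bucket

With the default 96 MB chunk size this could buffer up to about
32 × 96 MB ≈ 3 GB if all the transfers are large files, so reduce
--transfers or --b2-chunk-size if memory is tight.
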
  9445  Versions
  9446  
  9447  When rclone uploads a new version of a file it creates a new version of
  9448  it. Likewise when you delete a file, the old version will be marked
  9449  hidden and still be available. Conversely, you may opt in to a “hard
  9450  delete” of files with the --b2-hard-delete flag which would permanently
  9451  remove the file instead of hiding it.
  9452  
  9453  Old versions of files, where available, are visible using the
  9454  --b2-versions flag.
  9455  
Note that --b2-versions does not work with crypt at the moment (see
#1627). Using --backup-dir with rclone is the recommended way of working
around this.
  9459  
  9460  If you wish to remove all the old versions then you can use the
  9461  rclone cleanup remote:bucket command which will delete all the old
  9462  versions of files, leaving the current ones intact. You can also supply
  9463  a path and only old versions under that path will be deleted, eg
  9464  rclone cleanup remote:bucket/path/to/stuff.
  9465  
  9466  Note that cleanup will remove partially uploaded files from the bucket
  9467  if they are more than a day old.
  9468  
  9469  When you purge a bucket, the current and the old versions will be
  9470  deleted then the bucket will be deleted.
  9471  
  9472  However delete will cause the current versions of the files to become
  9473  hidden old versions.
  9474  
  9475  Here is a session showing the listing and retrieval of an old version
  9476  followed by a cleanup of the old versions.
  9477  
  9478  Show current version and all the versions with --b2-versions flag.
  9479  
  9480      $ rclone -q ls b2:cleanup-test
  9481              9 one.txt
  9482  
  9483      $ rclone -q --b2-versions ls b2:cleanup-test
  9484              9 one.txt
  9485              8 one-v2016-07-04-141032-000.txt
  9486             16 one-v2016-07-04-141003-000.txt
  9487             15 one-v2016-07-02-155621-000.txt
  9488  
  9489  Retrieve an old version
  9490  
  9491      $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
  9492  
  9493      $ ls -l /tmp/one-v2016-07-04-141003-000.txt
  9494      -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt
  9495  
  9496  Clean up all the old versions and show that they’ve gone.
  9497  
  9498      $ rclone -q cleanup b2:cleanup-test
  9499  
  9500      $ rclone -q ls b2:cleanup-test
  9501              9 one.txt
  9502  
  9503      $ rclone -q --b2-versions ls b2:cleanup-test
  9504              9 one.txt
  9505  
  9506  Data usage
  9507  
  9508  It is useful to know how many requests are sent to the server in
  9509  different scenarios.
  9510  
  9511  All copy commands send the following 4 requests:
  9512  
  9513      /b2api/v1/b2_authorize_account
  9514      /b2api/v1/b2_create_bucket
  9515      /b2api/v1/b2_list_buckets
  9516      /b2api/v1/b2_list_file_names
  9517  
  9518  The b2_list_file_names request will be sent once for every 1k files in
  9519  the remote path, providing the checksum and modification time of the
  9520  listed files. As of version 1.33 issue #818 causes extra requests to be
  9521  sent when using B2 with Crypt. When a copy operation does not require
  9522  any files to be uploaded, no more requests will be sent.
  9523  
Uploading files that do not require chunking will send 2 requests per
  9525  file upload:
  9526  
  9527      /b2api/v1/b2_get_upload_url
  9528      /b2api/v1/b2_upload_file/
  9529  
Uploading files requiring chunking will send 2 requests (one each to
  9531  start and finish the upload) and another 2 requests for each chunk:
  9532  
  9533      /b2api/v1/b2_start_large_file
  9534      /b2api/v1/b2_get_upload_part_url
  9535      /b2api/v1/b2_upload_part/
  9536      /b2api/v1/b2_finish_large_file
  9537  
  9538  Versions
  9539  
  9540  Versions can be viewed with the --b2-versions flag. When it is set
  9541  rclone will show and act on older versions of files. For example
  9542  
  9543  Listing without --b2-versions
  9544  
  9545      $ rclone -q ls b2:cleanup-test
  9546              9 one.txt
  9547  
  9548  And with
  9549  
  9550      $ rclone -q --b2-versions ls b2:cleanup-test
  9551              9 one.txt
  9552              8 one-v2016-07-04-141032-000.txt
  9553             16 one-v2016-07-04-141003-000.txt
  9554             15 one-v2016-07-02-155621-000.txt
  9555  
  9556  Showing that the current version is unchanged but older versions can be
  9557  seen. These have the UTC date that they were uploaded to the server to
  9558  the nearest millisecond appended to them.
  9559  
  9560  Note that when using --b2-versions no file write operations are
  9561  permitted, so you can’t upload files or delete them.
  9562  
  9563  Standard Options
  9564  
  9565  Here are the standard options specific to b2 (Backblaze B2).
  9566  
  9567  –b2-account
  9568  
  9569  Account ID or Application Key ID
  9570  
  9571  -   Config: account
  9572  -   Env Var: RCLONE_B2_ACCOUNT
  9573  -   Type: string
  9574  -   Default: ""
  9575  
  9576  –b2-key
  9577  
  9578  Application Key
  9579  
  9580  -   Config: key
  9581  -   Env Var: RCLONE_B2_KEY
  9582  -   Type: string
  9583  -   Default: ""
  9584  
  9585  –b2-hard-delete
  9586  
  9587  Permanently delete files on remote removal, otherwise hide files.
  9588  
  9589  -   Config: hard_delete
  9590  -   Env Var: RCLONE_B2_HARD_DELETE
  9591  -   Type: bool
  9592  -   Default: false
  9593  
  9594  Advanced Options
  9595  
  9596  Here are the advanced options specific to b2 (Backblaze B2).
  9597  
  9598  –b2-endpoint
  9599  
  9600  Endpoint for the service. Leave blank normally.
  9601  
  9602  -   Config: endpoint
  9603  -   Env Var: RCLONE_B2_ENDPOINT
  9604  -   Type: string
  9605  -   Default: ""
  9606  
  9607  –b2-test-mode
  9608  
  9609  A flag string for X-Bz-Test-Mode header for debugging.
  9610  
  9611  This is for debugging purposes only. Setting it to one of the strings
  9612  below will cause b2 to return specific errors:
  9613  
  9614  -   “fail_some_uploads”
  9615  -   “expire_some_account_authorization_tokens”
  9616  -   “force_cap_exceeded”
  9617  
  9618  These will be set in the “X-Bz-Test-Mode” header which is documented in
  9619  the b2 integrations checklist.
  9620  
  9621  -   Config: test_mode
  9622  -   Env Var: RCLONE_B2_TEST_MODE
  9623  -   Type: string
  9624  -   Default: ""
  9625  
  9626  –b2-versions
  9627  
  9628  Include old versions in directory listings. Note that when using this no
  9629  file write operations are permitted, so you can’t upload files or delete
  9630  them.
  9631  
  9632  -   Config: versions
  9633  -   Env Var: RCLONE_B2_VERSIONS
  9634  -   Type: bool
  9635  -   Default: false
  9636  
  9637  –b2-upload-cutoff
  9638  
  9639  Cutoff for switching to chunked upload.
  9640  
  9641  Files above this size will be uploaded in chunks of “–b2-chunk-size”.
  9642  
  9643  This value should be set no larger than 4.657GiB (== 5GB).
  9644  
  9645  -   Config: upload_cutoff
  9646  -   Env Var: RCLONE_B2_UPLOAD_CUTOFF
  9647  -   Type: SizeSuffix
  9648  -   Default: 200M
  9649  
  9650  –b2-chunk-size
  9651  
  9652  Upload chunk size. Must fit in memory.
  9653  
  9654  When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
  9656  “–transfers” chunks in progress at once. 5,000,000 Bytes is the minimum
  9657  size.
  9658  
  9659  -   Config: chunk_size
  9660  -   Env Var: RCLONE_B2_CHUNK_SIZE
  9661  -   Type: SizeSuffix
  9662  -   Default: 96M
  9663  
  9664  –b2-disable-checksum
  9665  
  9666  Disable checksums for large (> upload cutoff) files
  9667  
  9668  -   Config: disable_checksum
  9669  -   Env Var: RCLONE_B2_DISABLE_CHECKSUM
  9670  -   Type: bool
  9671  -   Default: false
  9672  
  9673  –b2-download-url
  9674  
  9675  Custom endpoint for downloads.
  9676  
  9677  This is usually set to a Cloudflare CDN URL as Backblaze offers free
  9678  egress for data downloaded through the Cloudflare network. Leave blank
  9679  if you want to use the endpoint provided by Backblaze.
  9680  
  9681  -   Config: download_url
  9682  -   Env Var: RCLONE_B2_DOWNLOAD_URL
  9683  -   Type: string
  9684  -   Default: ""
  9685  
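As a sketch, a B2 remote fronted by your own download domain might be
configured like this (all values are placeholders):

    [remote]
    type = b2
    account = your_application_key_id
    key = your_application_key
    download_url = https://b2-cdn.example.com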
  9686  
  9687  Box
  9688  
  9689  Paths are specified as remote:path
  9690  
  9691  Paths may be as deep as required, eg remote:directory/subdirectory.
  9692  
  9693  The initial setup for Box involves getting a token from Box which you
  9694  need to do in your browser. rclone config walks you through it.
  9695  
  9696  Here is an example of how to make a remote called remote. First run:
  9697  
  9698       rclone config
  9699  
  9700  This will guide you through an interactive setup process:
  9701  
  9702      No remotes found - make a new one
  9703      n) New remote
  9704      s) Set configuration password
  9705      q) Quit config
  9706      n/s/q> n
  9707      name> remote
  9708      Type of storage to configure.
  9709      Choose a number from below, or type in your own value
  9710       1 / Amazon Drive
  9711         \ "amazon cloud drive"
  9712       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
  9713         \ "s3"
  9714       3 / Backblaze B2
  9715         \ "b2"
  9716       4 / Box
  9717         \ "box"
  9718       5 / Dropbox
  9719         \ "dropbox"
  9720       6 / Encrypt/Decrypt a remote
  9721         \ "crypt"
  9722       7 / FTP Connection
  9723         \ "ftp"
  9724       8 / Google Cloud Storage (this is not Google Drive)
  9725         \ "google cloud storage"
  9726       9 / Google Drive
  9727         \ "drive"
  9728      10 / Hubic
  9729         \ "hubic"
  9730      11 / Local Disk
  9731         \ "local"
  9732      12 / Microsoft OneDrive
  9733         \ "onedrive"
  9734      13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  9735         \ "swift"
  9736      14 / SSH/SFTP Connection
  9737         \ "sftp"
  9738      15 / Yandex Disk
  9739         \ "yandex"
  9740      16 / http Connection
  9741         \ "http"
  9742      Storage> box
  9743      Box App Client Id - leave blank normally.
  9744      client_id> 
  9745      Box App Client Secret - leave blank normally.
  9746      client_secret> 
  9747      Remote config
  9748      Use auto config?
  9749       * Say Y if not sure
  9750       * Say N if you are working on a remote or headless machine
  9751      y) Yes
  9752      n) No
  9753      y/n> y
  9754      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  9755      Log in and authorize rclone for access
  9756      Waiting for code...
  9757      Got code
  9758      --------------------
  9759      [remote]
  9760      client_id = 
  9761      client_secret = 
  9762      token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
  9763      --------------------
  9764      y) Yes this is OK
  9765      e) Edit this remote
  9766      d) Delete this remote
  9767      y/e/d> y
  9768  
  9769  See the remote setup docs for how to set it up on a machine with no
  9770  Internet browser available.
  9771  
  9772  Note that rclone runs a webserver on your local machine to collect the
  9773  token as returned from Box. This only runs from the moment it opens your
  9774  browser to the moment you get back the verification code. This is on
http://127.0.0.1:53682/ and it may require you to unblock it
  9776  temporarily if you are running a host firewall.
  9777  
  9778  Once configured you can then use rclone like this,
  9779  
  9780  List directories in top level of your Box
  9781  
  9782      rclone lsd remote:
  9783  
  9784  List all the files in your Box
  9785  
  9786      rclone ls remote:
  9787  
  9788  To copy a local directory to an Box directory called backup
  9789  
  9790      rclone copy /home/source remote:backup
  9791  
  9792  Using rclone with an Enterprise account with SSO
  9793  
  9794  If you have an “Enterprise” account type with Box with single sign on
  9795  (SSO), you need to create a password to use Box with rclone. This can be
done at your Enterprise Box account by going to Settings, the “Account”
tab, and then setting the password in the “Authentication” field.
  9798  
Once you have done this, you can set up your Enterprise Box account
using the same procedure detailed above, using the password you have
just set.
  9802  
  9803  Invalid refresh token
  9804  
  9805  According to the box docs:
  9806  
  9807    Each refresh_token is valid for one use in 60 days.
  9808  
  9809  This means that if you
  9810  
  9811  -   Don’t use the box remote for 60 days
  9812  -   Copy the config file with a box refresh token in and use it in two
  9813      places
  9814  -   Get an error on a token refresh
  9815  
  9816  then rclone will return an error which includes the text
  9817  Invalid refresh token.
  9818  
  9819  To fix this you will need to use oauth2 again to update the refresh
  9820  token. You can use the methods in the remote setup docs, bearing in mind
  9821  that if you use the copy the config file method, you should not use that
  9822  remote on the computer you did the authentication on.
  9823  
  9824  Here is how to do it.
  9825  
  9826      $ rclone config
  9827      Current remotes:
  9828  
  9829      Name                 Type
  9830      ====                 ====
  9831      remote               box
  9832  
  9833      e) Edit existing remote
  9834      n) New remote
  9835      d) Delete remote
  9836      r) Rename remote
  9837      c) Copy remote
  9838      s) Set configuration password
  9839      q) Quit config
  9840      e/n/d/r/c/s/q> e
  9841      Choose a number from below, or type in an existing value
  9842       1 > remote
  9843      remote> remote
  9844      --------------------
  9845      [remote]
  9846      type = box
  9847      token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
  9848      --------------------
  9849      Edit remote
  9850      Value "client_id" = ""
  9851      Edit? (y/n)>
  9852      y) Yes
  9853      n) No
  9854      y/n> n
  9855      Value "client_secret" = ""
  9856      Edit? (y/n)>
  9857      y) Yes
  9858      n) No
  9859      y/n> n
  9860      Remote config
  9861      Already have a token - refresh?
  9862      y) Yes
  9863      n) No
  9864      y/n> y
  9865      Use auto config?
  9866       * Say Y if not sure
  9867       * Say N if you are working on a remote or headless machine
  9868      y) Yes
  9869      n) No
  9870      y/n> y
  9871      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  9872      Log in and authorize rclone for access
  9873      Waiting for code...
  9874      Got code
  9875      --------------------
  9876      [remote]
  9877      type = box
  9878      token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
  9879      --------------------
  9880      y) Yes this is OK
  9881      e) Edit this remote
  9882      d) Delete this remote
  9883      y/e/d> y
  9884  
  9885  Modified time and hashes
  9886  
  9887  Box allows modification times to be set on objects accurate to 1 second.
  9888  These will be used to detect whether objects need syncing or not.
  9889  
  9890  Box supports SHA1 type hashes, so you can use the --checksum flag.
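
For example, to sync based on SHA1 checksums rather than size and
modification time, using the remote from the examples above:

    rclone sync --checksum /home/source remote:backup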
  9891  
  9892  Transfers
  9893  
  9894  For files above 50MB rclone will use a chunked transfer. Rclone will
  9895  upload up to --transfers chunks at the same time (shared among all the
  9896  multipart uploads). Chunks are buffered in memory and are normally 8MB
  9897  so increasing --transfers will increase memory use.
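
For example, a copy run with --transfers 8 would buffer up to 8 chunks
of normally 8MB each (roughly 64MB) in memory:

    rclone copy --transfers 8 /home/source remote:backup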
  9898  
  9899  Deleting files
  9900  
  9901  Depending on the enterprise settings for your user, the item will either
  9902  be actually deleted from Box or moved to the trash.
  9903  
  9904  Standard Options
  9905  
  9906  Here are the standard options specific to box (Box).
  9907  
  9908  –box-client-id
  9909  
  9910  Box App Client Id. Leave blank normally.
  9911  
  9912  -   Config: client_id
  9913  -   Env Var: RCLONE_BOX_CLIENT_ID
  9914  -   Type: string
  9915  -   Default: ""
  9916  
  9917  –box-client-secret
  9918  
Box App Client Secret. Leave blank normally.
  9920  
  9921  -   Config: client_secret
  9922  -   Env Var: RCLONE_BOX_CLIENT_SECRET
  9923  -   Type: string
  9924  -   Default: ""
  9925  
  9926  Advanced Options
  9927  
  9928  Here are the advanced options specific to box (Box).
  9929  
  9930  –box-upload-cutoff
  9931  
  9932  Cutoff for switching to multipart upload (>= 50MB).
  9933  
  9934  -   Config: upload_cutoff
  9935  -   Env Var: RCLONE_BOX_UPLOAD_CUTOFF
  9936  -   Type: SizeSuffix
  9937  -   Default: 50M
  9938  
  9939  –box-commit-retries
  9940  
  9941  Max number of times to try committing a multipart file.
  9942  
  9943  -   Config: commit_retries
  9944  -   Env Var: RCLONE_BOX_COMMIT_RETRIES
  9945  -   Type: int
  9946  -   Default: 100
  9947  
  9948  Limitations
  9949  
  9950  Note that Box is case insensitive so you can’t have a file called
  9951  “Hello.doc” and one called “hello.doc”.
  9952  
Box file names can’t have the \ character in them. rclone maps this to
and from an identical looking unicode equivalent ＼ (a fullwidth
backslash).
  9955  
  9956  Box only supports filenames up to 255 characters in length.
  9957  
  9958  
  9959  Cache (BETA)
  9960  
  9961  The cache remote wraps another existing remote and stores file structure
  9962  and its data for long running tasks like rclone mount.
  9963  
  9964  To get started you just need to have an existing remote which can be
  9965  configured with cache.
  9966  
  9967  Here is an example of how to make a remote called test-cache. First run:
  9968  
  9969       rclone config
  9970  
  9971  This will guide you through an interactive setup process:
  9972  
  9973      No remotes found - make a new one
  9974      n) New remote
  9975      r) Rename remote
  9976      c) Copy remote
  9977      s) Set configuration password
  9978      q) Quit config
  9979      n/r/c/s/q> n
  9980      name> test-cache
  9981      Type of storage to configure.
  9982      Choose a number from below, or type in your own value
  9983      ...
  9984       5 / Cache a remote
  9985         \ "cache"
  9986      ...
  9987      Storage> 5
  9988      Remote to cache.
  9989      Normally should contain a ':' and a path, eg "myremote:path/to/dir",
  9990      "myremote:bucket" or maybe "myremote:" (not recommended).
  9991      remote> local:/test
  9992      Optional: The URL of the Plex server
  9993      plex_url> http://127.0.0.1:32400
  9994      Optional: The username of the Plex user
  9995      plex_username> dummyusername
  9996      Optional: The password of the Plex user
  9997      y) Yes type in my own password
  9998      g) Generate random password
  9999      n) No leave this optional password blank
 10000      y/g/n> y
 10001      Enter the password:
 10002      password:
 10003      Confirm the password:
 10004      password:
 10005      The size of a chunk. Lower value good for slow connections but can affect seamless reading.
 10006      Default: 5M
 10007      Choose a number from below, or type in your own value
 10008       1 / 1MB
 10009         \ "1m"
 10010       2 / 5 MB
 10011         \ "5M"
 10012       3 / 10 MB
 10013         \ "10M"
 10014      chunk_size> 2
 10015      How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
 10016      Accepted units are: "s", "m", "h".
 10017      Default: 5m
 10018      Choose a number from below, or type in your own value
 10019       1 / 1 hour
 10020         \ "1h"
 10021       2 / 24 hours
 10022         \ "24h"
     3 / 48 hours
 10024         \ "48h"
    info_age> 3
 10026      The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
 10027      Default: 10G
 10028      Choose a number from below, or type in your own value
 10029       1 / 500 MB
 10030         \ "500M"
 10031       2 / 1 GB
 10032         \ "1G"
 10033       3 / 10 GB
 10034         \ "10G"
 10035      chunk_total_size> 3
 10036      Remote config
 10037      --------------------
 10038      [test-cache]
 10039      remote = local:/test
 10040      plex_url = http://127.0.0.1:32400
 10041      plex_username = dummyusername
 10042      plex_password = *** ENCRYPTED ***
 10043      chunk_size = 5M
 10044      info_age = 48h
 10045      chunk_total_size = 10G
 10046  
 10047  You can then use it like this,
 10048  
 10049  List directories in top level of your drive
 10050  
 10051      rclone lsd test-cache:
 10052  
 10053  List all the files in your drive
 10054  
 10055      rclone ls test-cache:
 10056  
 10057  To start a cached mount
 10058  
 10059      rclone mount --allow-other test-cache: /var/tmp/test-cache
 10060  
 10061  Write Features
 10062  
 10063  Offline uploading
 10064  
 10065  In an effort to make writing through cache more reliable, the backend
 10066  now supports this feature which can be activated by specifying a
 10067  cache-tmp-upload-path.
 10068  
A file goes through these states when using this feature:
 10070  
 10071  1.  An upload is started (usually by copying a file on the cache remote)
 10072  2.  When the copy to the temporary location is complete the file is part
 10073      of the cached remote and looks and behaves like any other file
 10074      (reading included)
 10075  3.  After cache-tmp-wait-time passes and the file is next in line,
 10076      rclone move is used to move the file to the cloud provider
 10077  4.  Reading the file still works during the upload but most
 10078      modifications on it will be prohibited
5.  Once the move is complete the file is unlocked for modifications as
    it becomes like any other regular file
 10081  6.  If the file is being read through cache when it’s actually deleted
 10082      from the temporary path then cache will simply swap the source to
 10083      the cloud provider without interrupting the reading (small blip can
 10084      happen though)
 10085  
 10086  Files are uploaded in sequence and only one file is uploaded at a time.
 10087  Uploads will be stored in a queue and be processed based on the order
they were added. The queue and the temporary storage are persistent
 10089  across restarts but can be cleared on startup with the --cache-db-purge
 10090  flag.
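
As a rough sketch, offline uploading could be enabled on the test-cache
remote from above like this (the upload path and wait time are just
example values):

    rclone mount --allow-other \
        --cache-tmp-upload-path /var/tmp/cache-uploads \
        --cache-tmp-wait-time 60s \
        test-cache: /var/tmp/test-cache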
 10091  
 10092  Write Support
 10093  
 10094  Writes are supported through cache. One caveat is that a mounted cache
 10095  remote does not add any retry or fallback mechanism to the upload
 10096  operation. This will depend on the implementation of the wrapped remote.
 10097  Consider using Offline uploading for reliable writes.
 10098  
One special case is covered by cache-writes: when enabled, the file
data is cached at the same time as the upload, making it available from
the cache store immediately once the upload is finished.
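
For example, building on the mount command from earlier:

    rclone mount --allow-other --cache-writes test-cache: /var/tmp/test-cache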
 10102  
 10103  Read Features
 10104  
 10105  Multiple connections
 10106  
To counter the high latency between a local PC where rclone is running
and cloud providers, the cache remote can split a read into multiple
smaller chunk requests to the cloud provider and combine them locally,
so that the data is usually available almost immediately before the
reader needs it.
 10112  
 10113  This is similar to buffering when media files are played online. Rclone
will stay around the current marker but always try its best to stay
ahead and prepare the data before it is needed.
 10116  
 10117  Plex Integration
 10118  
 10119  There is a direct integration with Plex which allows cache to detect
 10120  during reading if the file is in playback or not. This helps cache to
adapt how it queries the cloud provider depending on what the data is
needed for.
 10122  
 10123  Scans will have a minimum amount of workers (1) while in a confirmed
 10124  playback cache will deploy the configured number of workers.
 10125  
 10126  This integration opens the doorway to additional performance
 10127  improvements which will be explored in the near future.
 10128  
 10129  NOTE: If Plex options are not configured, cache will function with its
 10130  configured options without adapting any of its settings.
 10131  
 10132  How to enable? Run rclone config and add all the Plex options (endpoint,
 10133  username and password) in your remote and it will be automatically
 10134  enabled.
 10135  
Affected settings:

-   cache-workers: _Configured value_ during confirmed playback or _1_
    all the other times
 10138  
 10139  Certificate Validation
 10140  
 10141  When the Plex server is configured to only accept secure connections, it
is possible to use .plex.direct URLs to ensure certificate validation
succeeds. These URLs are used by Plex internally to connect to the Plex
server securely.

The format for these URLs is the following:
 10147  
 10148  https://ip-with-dots-replaced.server-hash.plex.direct:32400/
 10149  
 10150  The ip-with-dots-replaced part can be any IPv4 address, where the dots
 10151  have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.
 10152  
 10153  To get the server-hash part, the easiest way is to visit
 10154  
 10155  https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
 10156  
 10157  This page will list all the available Plex servers for your account with
 10158  at least one .plex.direct link for each. Copy one URL and replace the IP
 10159  address with the desired address. This can be used as the plex_url
 10160  value.
 10161  
 10162  Known issues
 10163  
 10164  Mount and –dir-cache-time
 10165  
 10166  –dir-cache-time controls the first layer of directory caching which
 10167  works at the mount layer. Being an independent caching mechanism from
 10168  the cache backend, it will manage its own entries based on the
 10169  configured time.
 10170  
 10171  To avoid getting in a scenario where dir cache has obsolete data and
 10172  cache would have the correct one, try to set --dir-cache-time to a lower
 10173  time than --cache-info-age. Default values are already configured in
 10174  this way.
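
For example (the values below are only meant to illustrate the
relationship between the two flags):

    rclone mount --dir-cache-time 30m --cache-info-age 1h \
        test-cache: /var/tmp/test-cache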
 10175  
 10176  Windows support - Experimental
 10177  
There are a couple of issues with Windows mount functionality that
still require some investigation. It should be considered experimental
for now while fixes come in for this OS.

Most of the issues seem to be related to the difference between
filesystems on Linux flavors and Windows as cache is heavily dependent
on them.
 10185  
 10186  Any reports or feedback on how cache behaves on this OS is greatly
 10187  appreciated.
 10188  
 10189  -   https://github.com/ncw/rclone/issues/1935
 10190  -   https://github.com/ncw/rclone/issues/1907
 10191  -   https://github.com/ncw/rclone/issues/1834
 10192  
 10193  Risk of throttling
 10194  
Future iterations of the cache backend will make use of the polling
functionality of the cloud provider to synchronize and at the same time
make writing through it more tolerant to failures.
 10198  
There are a couple of enhancements being tracked to add these, but in
the meantime there is a valid concern that the expiring cache listings
can lead to cloud provider throttles or bans due to repeated queries on
very large mounts.
 10203  
Some recommendations:

-   don’t use a very small interval for entry information
    (--cache-info-age)
-   while writes aren’t yet optimised, you can still write through
    cache which gives you the advantage of adding the file to the cache
    at the same time if configured to do so.
 10208  
 10209  Future enhancements:
 10210  
 10211  -   https://github.com/ncw/rclone/issues/1937
 10212  -   https://github.com/ncw/rclone/issues/1936
 10213  
 10214  cache and crypt
 10215  
 10216  One common scenario is to keep your data encrypted in the cloud provider
 10217  using the crypt remote. crypt uses a similar technique to wrap around an
 10218  existing remote and handles this translation in a seamless way.
 10219  
 10220  There is an issue with wrapping the remotes in this order: CLOUD REMOTE
 10221  -> CRYPT -> CACHE
 10222  
 10223  During testing, I experienced a lot of bans with the remotes in this
 10224  order. I suspect it might be related to how crypt opens files on the
 10225  cloud provider which makes it think we’re downloading the full file
 10226  instead of small chunks. Organizing the remotes in this order yields
 10227  better results: CLOUD REMOTE -> CACHE -> CRYPT
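
A minimal config sketch of the recommended order, assuming an existing
remote called cloud: (all the remote names here are just examples):

    [cloud-cache]
    type = cache
    remote = cloud:bucket

    [cloud-crypt]
    type = crypt
    remote = cloud-cache:
    password = *** ENCRYPTED ***

You would then use cloud-crypt: as the remote to mount or sync.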
 10228  
 10229  absolute remote paths
 10230  
 10231  cache can not differentiate between relative and absolute paths for the
 10232  wrapped remote. Any path given in the remote config setting and on the
 10233  command line will be passed to the wrapped remote as is, but for storing
 10234  the chunks on disk the path will be made relative by removing any
 10235  leading / character.
 10236  
 10237  This behavior is irrelevant for most backend types, but there are
 10238  backends where a leading / changes the effective directory, e.g. in the
 10239  sftp backend paths starting with a / are relative to the root of the SSH
 10240  server and paths without are relative to the user home directory. As a
 10241  result sftp:bin and sftp:/bin will share the same cache folder, even if
 10242  they represent a different directory on the SSH server.
 10243  
 10244  Cache and Remote Control (–rc)
 10245  
Cache supports the new --rc mode in rclone and can be remote controlled
through the following end points. By default, the listener is disabled
if you do not add the --rc flag.
 10249  
 10250  rc cache/expire
 10251  
 10252  Purge a remote from the cache backend. Supports either a directory or a
 10253  file. It supports both encrypted and unencrypted file names if cache is
 10254  wrapped by crypt.
 10255  
Params:

-   REMOTE = path to remote (REQUIRED)
-   WITHDATA = true/false to delete cached data (chunks) as well
    _(optional, false by default)_
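
For example, with the listener enabled by adding --rc to the mount
command, the cache for a directory or file could be expired like this
(the parameters above are normally passed as remote= and withData= on
the command line):

    rclone rc cache/expire remote=path/to/dir/
    rclone rc cache/expire remote=path/to/file withData=true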
 10258  
 10259  Standard Options
 10260  
 10261  Here are the standard options specific to cache (Cache a remote).
 10262  
 10263  –cache-remote
 10264  
 10265  Remote to cache. Normally should contain a ‘:’ and a path, eg
 10266  “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not
 10267  recommended).
 10268  
 10269  -   Config: remote
 10270  -   Env Var: RCLONE_CACHE_REMOTE
 10271  -   Type: string
 10272  -   Default: ""
 10273  
 10274  –cache-plex-url
 10275  
 10276  The URL of the Plex server
 10277  
 10278  -   Config: plex_url
 10279  -   Env Var: RCLONE_CACHE_PLEX_URL
 10280  -   Type: string
 10281  -   Default: ""
 10282  
 10283  –cache-plex-username
 10284  
 10285  The username of the Plex user
 10286  
 10287  -   Config: plex_username
 10288  -   Env Var: RCLONE_CACHE_PLEX_USERNAME
 10289  -   Type: string
 10290  -   Default: ""
 10291  
 10292  –cache-plex-password
 10293  
 10294  The password of the Plex user
 10295  
 10296  -   Config: plex_password
 10297  -   Env Var: RCLONE_CACHE_PLEX_PASSWORD
 10298  -   Type: string
 10299  -   Default: ""
 10300  
 10301  –cache-chunk-size
 10302  
 10303  The size of a chunk (partial file data).
 10304  
 10305  Use lower numbers for slower connections. If the chunk size is changed,
 10306  any downloaded chunks will be invalid and cache-chunk-path will need to
 10307  be cleared or unexpected EOF errors will occur.
 10308  
 10309  -   Config: chunk_size
 10310  -   Env Var: RCLONE_CACHE_CHUNK_SIZE
 10311  -   Type: SizeSuffix
 10312  -   Default: 5M
 10313  -   Examples:
 10314      -   “1m”
 10315          -   1MB
 10316      -   “5M”
 10317          -   5 MB
 10318      -   “10M”
 10319          -   10 MB
 10320  
 10321  –cache-info-age
 10322  
 10323  How long to cache file structure information (directory listings, file
 10324  size, times etc). If all write operations are done through the cache
 10325  then you can safely make this value very large as the cache store will
 10326  also be updated in real time.
 10327  
 10328  -   Config: info_age
 10329  -   Env Var: RCLONE_CACHE_INFO_AGE
 10330  -   Type: Duration
 10331  -   Default: 6h0m0s
 10332  -   Examples:
 10333      -   “1h”
 10334          -   1 hour
 10335      -   “24h”
 10336          -   24 hours
 10337      -   “48h”
 10338          -   48 hours
 10339  
 10340  –cache-chunk-total-size
 10341  
 10342  The total size that the chunks can take up on the local disk.
 10343  
 10344  If the cache exceeds this value then it will start to delete the oldest
 10345  chunks until it goes under this value.
 10346  
 10347  -   Config: chunk_total_size
 10348  -   Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
 10349  -   Type: SizeSuffix
 10350  -   Default: 10G
 10351  -   Examples:
 10352      -   “500M”
 10353          -   500 MB
 10354      -   “1G”
 10355          -   1 GB
 10356      -   “10G”
 10357          -   10 GB
 10358  
 10359  Advanced Options
 10360  
 10361  Here are the advanced options specific to cache (Cache a remote).
 10362  
 10363  –cache-plex-token
 10364  
 10365  The plex token for authentication - auto set normally
 10366  
 10367  -   Config: plex_token
 10368  -   Env Var: RCLONE_CACHE_PLEX_TOKEN
 10369  -   Type: string
 10370  -   Default: ""
 10371  
 10372  –cache-plex-insecure
 10373  
 10374  Skip all certificate verifications when connecting to the Plex server
 10375  
 10376  -   Config: plex_insecure
 10377  -   Env Var: RCLONE_CACHE_PLEX_INSECURE
 10378  -   Type: string
 10379  -   Default: ""
 10380  
 10381  –cache-db-path
 10382  
 10383  Directory to store file structure metadata DB. The remote name is used
 10384  as the DB file name.
 10385  
 10386  -   Config: db_path
 10387  -   Env Var: RCLONE_CACHE_DB_PATH
 10388  -   Type: string
 10389  -   Default: “$HOME/.cache/rclone/cache-backend”
 10390  
 10391  –cache-chunk-path
 10392  
 10393  Directory to cache chunk files.
 10394  
 10395  Path to where partial file data (chunks) are stored locally. The remote
 10396  name is appended to the final path.
 10397  
 10398  This config follows the “–cache-db-path”. If you specify a custom
 10399  location for “–cache-db-path” and don’t specify one for
 10400  “–cache-chunk-path” then “–cache-chunk-path” will use the same path as
 10401  “–cache-db-path”.
 10402  
 10403  -   Config: chunk_path
 10404  -   Env Var: RCLONE_CACHE_CHUNK_PATH
 10405  -   Type: string
 10406  -   Default: “$HOME/.cache/rclone/cache-backend”
 10407  
 10408  –cache-db-purge
 10409  
 10410  Clear all the cached data for this remote on start.
 10411  
 10412  -   Config: db_purge
 10413  -   Env Var: RCLONE_CACHE_DB_PURGE
 10414  -   Type: bool
 10415  -   Default: false
 10416  
 10417  –cache-chunk-clean-interval
 10418  
 10419  How often should the cache perform cleanups of the chunk storage. The
 10420  default value should be ok for most people. If you find that the cache
 10421  goes over “cache-chunk-total-size” too often then try to lower this
 10422  value to force it to perform cleanups more often.
 10423  
 10424  -   Config: chunk_clean_interval
 10425  -   Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
 10426  -   Type: Duration
 10427  -   Default: 1m0s
 10428  
 10429  –cache-read-retries
 10430  
 10431  How many times to retry a read from a cache storage.
 10432  
 10433  Since reading from a cache stream is independent from downloading file
 10434  data, readers can get to a point where there’s no more data in the
cache. Most of the time this can indicate a connectivity issue if cache
 10436  isn’t able to provide file data anymore.
 10437  
For really slow connections, increase this to a point where the stream
is able to provide data, but expect a lot of stuttering.
 10440  
 10441  -   Config: read_retries
 10442  -   Env Var: RCLONE_CACHE_READ_RETRIES
 10443  -   Type: int
 10444  -   Default: 10
 10445  
 10446  –cache-workers
 10447  
 10448  How many workers should run in parallel to download chunks.
 10449  
Higher values will mean more parallel processing (better CPU needed)
and more concurrent requests to the cloud provider. This impacts
several aspects, like the cloud provider API limits and stress on the
hardware that rclone runs on, but it also means that streams will be
more fluid and data will be available much faster to readers.
 10455  
 10456  NOTE: If the optional Plex integration is enabled then this setting will
 10457  adapt to the type of reading performed and the value specified here will
 10458  be used as a maximum number of workers to use.
 10459  
 10460  -   Config: workers
 10461  -   Env Var: RCLONE_CACHE_WORKERS
 10462  -   Type: int
 10463  -   Default: 4
 10464  
 10465  –cache-chunk-no-memory
 10466  
 10467  Disable the in-memory cache for storing chunks during streaming.
 10468  
 10469  By default, cache will keep file data during streaming in RAM as well to
 10470  provide it to readers as fast as possible.
 10471  
 10472  This transient data is evicted as soon as it is read and the number of
 10473  chunks stored doesn’t exceed the number of workers. However, depending
 10474  on other settings like “cache-chunk-size” and “cache-workers” this
 10475  footprint can increase if there are parallel streams too (multiple files
 10476  being read at the same time).
 10477  
 10478  If the hardware permits it, use this feature to provide an overall
 10479  better performance during streaming but it can also be disabled if RAM
 10480  is not available on the local machine.
 10481  
 10482  -   Config: chunk_no_memory
 10483  -   Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
 10484  -   Type: bool
 10485  -   Default: false
 10486  
 10487  –cache-rps
 10488  
 10489  Limits the number of requests per second to the source FS (-1 to
 10490  disable)
 10491  
This setting places a hard limit on the number of requests per second
that cache will make to the cloud provider remote, and tries to respect
that value by inserting waits between reads.
 10495  
 10496  If you find that you’re getting banned or limited on the cloud provider
 10497  through cache and know that a smaller number of requests per second will
 10498  allow you to work with it then you can use this setting for that.
 10499  
A good balance of all the other settings should make this setting
unnecessary, but it is available for more special cases.
 10502  
 10503  NOTE: This will limit the number of requests during streams but other
 10504  API calls to the cloud provider like directory listings will still pass.
 10505  
 10506  -   Config: rps
 10507  -   Env Var: RCLONE_CACHE_RPS
 10508  -   Type: int
 10509  -   Default: -1
 10510  
 10511  –cache-writes
 10512  
 10513  Cache file data on writes through the FS
 10514  
 10515  If you need to read files immediately after you upload them through
 10516  cache you can enable this flag to have their data stored in the cache
 10517  store at the same time during upload.
 10518  
 10519  -   Config: writes
 10520  -   Env Var: RCLONE_CACHE_WRITES
 10521  -   Type: bool
 10522  -   Default: false
 10523  
 10524  –cache-tmp-upload-path
 10525  
 10526  Directory to keep temporary files until they are uploaded.
 10527  
This is the path that cache will use as temporary storage for new files
that need to be uploaded to the cloud provider.

Specifying a value will enable this feature. Without it, it is
completely disabled and files will be uploaded directly to the cloud
provider.
 10534  
 10535  -   Config: tmp_upload_path
 10536  -   Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
 10537  -   Type: string
 10538  -   Default: ""
 10539  
 10540  –cache-tmp-wait-time
 10541  
 10542  How long should files be stored in local cache before being uploaded
 10543  
 10544  This is the duration that a file must wait in the temporary location
 10545  _cache-tmp-upload-path_ before it is selected for upload.
 10546  
 10547  Note that only one file is uploaded at a time and it can take longer to
start the upload if a queue has formed for this purpose.
 10549  
 10550  -   Config: tmp_wait_time
 10551  -   Env Var: RCLONE_CACHE_TMP_WAIT_TIME
 10552  -   Type: Duration
 10553  -   Default: 15s
 10554  
 10555  –cache-db-wait-time
 10556  
 10557  How long to wait for the DB to be available - 0 is unlimited
 10558  
 10559  Only one process can have the DB open at any one time, so rclone waits
 10560  for this duration for the DB to become available before it gives an
 10561  error.
 10562  
 10563  If you set it to 0 then it will wait forever.
 10564  
 10565  -   Config: db_wait_time
 10566  -   Env Var: RCLONE_CACHE_DB_WAIT_TIME
 10567  -   Type: Duration
 10568  -   Default: 1s
 10569  
 10570  
 10571  Crypt
 10572  
 10573  The crypt remote encrypts and decrypts another remote.
 10574  
 10575  To use it first set up the underlying remote following the config
 10576  instructions for that remote. You can also use a local pathname instead
 10577  of a remote which will encrypt and decrypt from that directory which
 10578  might be useful for encrypting onto a USB stick for example.
 10579  
 10580  First check your chosen remote is working - we’ll call it remote:path in
 10581  these docs. Note that anything inside remote:path will be encrypted and
 10582  anything outside won’t. This means that if you are using a bucket based
 10583  remote (eg S3, B2, swift) then you should probably put the bucket in the
 10584  remote s3:bucket. If you just use s3: then rclone will make encrypted
 10585  bucket names too (if using file name encryption) which may or may not be
 10586  what you want.
 10587  
 10588  Now configure crypt using rclone config. We will call this one secret to
 10589  differentiate it from the remote.
 10590  
 10591      No remotes found - make a new one
 10592      n) New remote
 10593      s) Set configuration password
 10594      q) Quit config
 10595      n/s/q> n   
 10596      name> secret
 10597      Type of storage to configure.
 10598      Choose a number from below, or type in your own value
 10599       1 / Amazon Drive
 10600         \ "amazon cloud drive"
 10601       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 10602         \ "s3"
 10603       3 / Backblaze B2
 10604         \ "b2"
 10605       4 / Dropbox
 10606         \ "dropbox"
 10607       5 / Encrypt/Decrypt a remote
 10608         \ "crypt"
 10609       6 / Google Cloud Storage (this is not Google Drive)
 10610         \ "google cloud storage"
 10611       7 / Google Drive
 10612         \ "drive"
 10613       8 / Hubic
 10614         \ "hubic"
 10615       9 / Local Disk
 10616         \ "local"
 10617      10 / Microsoft OneDrive
 10618         \ "onedrive"
 10619      11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 10620         \ "swift"
 10621      12 / SSH/SFTP Connection
 10622         \ "sftp"
 10623      13 / Yandex Disk
 10624         \ "yandex"
 10625      Storage> 5
 10626      Remote to encrypt/decrypt.
 10627      Normally should contain a ':' and a path, eg "myremote:path/to/dir",
 10628      "myremote:bucket" or maybe "myremote:" (not recommended).
 10629      remote> remote:path
 10630      How to encrypt the filenames.
 10631      Choose a number from below, or type in your own value
 10632       1 / Don't encrypt the file names.  Adds a ".bin" extension only.
 10633         \ "off"
 10634       2 / Encrypt the filenames see the docs for the details.
 10635         \ "standard"
 10636       3 / Very simple filename obfuscation.
 10637         \ "obfuscate"
 10638      filename_encryption> 2
 10639      Option to either encrypt directory names or leave them intact.
 10640      Choose a number from below, or type in your own value
 10641       1 / Encrypt directory names.
 10642         \ "true"
 10643       2 / Don't encrypt directory names, leave them intact.
 10644         \ "false"
 10645      filename_encryption> 1
 10646      Password or pass phrase for encryption.
 10647      y) Yes type in my own password
 10648      g) Generate random password
 10649      y/g> y
 10650      Enter the password:
 10651      password:
 10652      Confirm the password:
 10653      password:
 10654      Password or pass phrase for salt. Optional but recommended.
 10655      Should be different to the previous password.
 10656      y) Yes type in my own password
 10657      g) Generate random password
 10658      n) No leave this optional password blank
 10659      y/g/n> g
 10660      Password strength in bits.
 10661      64 is just about memorable
 10662      128 is secure
 10663      1024 is the maximum
 10664      Bits> 128
 10665      Your password is: JAsJvRcgR-_veXNfy_sGmQ
 10666      Use this password?
 10667      y) Yes
 10668      n) No
 10669      y/n> y
 10670      Remote config
 10671      --------------------
 10672      [secret]
 10673      remote = remote:path
 10674      filename_encryption = standard
 10675      password = *** ENCRYPTED ***
 10676      password2 = *** ENCRYPTED ***
 10677      --------------------
 10678      y) Yes this is OK
 10679      e) Edit this remote
 10680      d) Delete this remote
 10681      y/e/d> y
 10682  
IMPORTANT The password stored in the config file is lightly obscured
 10684  so it isn’t immediately obvious what it is. It is in no way secure
 10685  unless you use config file encryption.
 10686  
 10687  A long passphrase is recommended, or you can use a random one. Note that
 10688  if you reconfigure rclone with the same passwords/passphrases elsewhere
 10689  it will be compatible - all the secrets used are derived from those two
 10690  passwords/passphrases.
 10691  
 10692  Note that rclone does not encrypt
 10693  
-   file length - this can be calculated within 16 bytes
 10695  -   modification time - used for syncing
 10696  
 10697  
 10698  Specifying the remote
 10699  
 10700  In normal use, make sure the remote has a : in. If you specify the
 10701  remote without a : then rclone will use a local directory of that name.
 10702  So if you use a remote of /path/to/secret/files then rclone will encrypt
 10703  stuff to that directory. If you use a remote of name then rclone will
 10704  put files in a directory called name in the current directory.
 10705  
 10706  If you specify the remote as remote:path/to/dir then rclone will store
 10707  encrypted files in path/to/dir on the remote. If you are using file name
 10708  encryption, then when you save files to secret:subdir/subfile this will
store them in the unencrypted path path/to/dir but the subdir/subfile
bit will be encrypted.
 10711  
 10712  Note that unless you want encrypted bucket names (which are difficult to
 10713  manage because you won’t know what directory they represent in web
 10714  interfaces etc), you should probably specify a bucket, eg
 10715  remote:secretbucket when using bucket based remotes such as S3, Swift,
 10716  Hubic, B2, GCS.
 10717  
 10718  
 10719  Example
 10720  
 10721  To test I made a little directory of files using “standard” file name
 10722  encryption.
 10723  
 10724      plaintext/
 10725      ├── file0.txt
 10726      ├── file1.txt
 10727      └── subdir
 10728          ├── file2.txt
 10729          ├── file3.txt
 10730          └── subsubdir
 10731              └── file4.txt
 10732  
 10733  Copy these to the remote and list them back
 10734  
 10735      $ rclone -q copy plaintext secret:
 10736      $ rclone -q ls secret:
 10737              7 file1.txt
 10738              6 file0.txt
 10739              8 subdir/file2.txt
 10740             10 subdir/subsubdir/file4.txt
 10741              9 subdir/file3.txt
 10742  
 10743  Now see what that looked like when encrypted
 10744  
 10745      $ rclone -q ls remote:path
 10746             55 hagjclgavj2mbiqm6u6cnjjqcg
 10747             54 v05749mltvv1tf4onltun46gls
 10748             57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
 10749             58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
 10750             56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
 10751  
 10752  Note that this retains the directory structure which means you can do
 10753  this
 10754  
 10755      $ rclone -q ls secret:subdir
 10756              8 file2.txt
 10757              9 file3.txt
 10758             10 subsubdir/file4.txt
 10759  
If you don’t use file name encryption then the remote will look like this -
 10761  note the .bin extensions added to prevent the cloud provider attempting
 10762  to interpret the data.
 10763  
 10764      $ rclone -q ls remote:path
 10765             54 file0.txt.bin
 10766             57 subdir/file3.txt.bin
 10767             56 subdir/file2.txt.bin
 10768             58 subdir/subsubdir/file4.txt.bin
 10769             55 file1.txt.bin
 10770  
 10771  File name encryption modes
 10772  
 10773  Here are some of the features of the file name encryption modes
 10774  
 10775  Off
 10776  
 10777  -   doesn’t hide file names or directory structure
 10778  -   allows for longer file names (~246 characters)
 10779  -   can use sub paths and copy single files
 10780  
 10781  Standard
 10782  
 10783  -   file names encrypted
 10784  -   file names can’t be as long (~143 characters)
 10785  -   can use sub paths and copy single files
 10786  -   directory structure visible
-   identical file names will have identical uploaded names
 10788  -   can use shortcuts to shorten the directory recursion
 10789  
 10790  Obfuscation
 10791  
 10792  This is a simple “rotate” of the filename, with each file having a rot
 10793  distance based on the filename. We store the distance at the beginning
 10794  of the filename. So a file called “hello” may become “53.jgnnq”
 10795  
 10796  This is not a strong encryption of filenames, but it may stop automated
 10797  scanning tools from picking up on filename patterns. As such it’s an
 10798  intermediate between “off” and “standard”. The advantage is that it
 10799  allows for longer path segment names.
 10800  
 10801  There is a possibility with some unicode based filenames that the
 10802  obfuscation is weak and may map lower case characters to upper case
 10803  equivalents. You can not rely on this for strong protection.
 10804  
 10805  -   file names very lightly obfuscated
 10806  -   file names can be longer than standard encryption
 10807  -   can use sub paths and copy single files
 10808  -   directory structure visible
-   identical file names will have identical uploaded names
 10810  
 10811  Cloud storage systems have various limits on file name length and total
 10812  path length which you are more likely to hit using “Standard” file name
 10813  encryption. If you keep your file names to below 156 characters in
 10814  length then you should be OK on all providers.
 10815  
 10816  There may be an even more secure file name encryption mode in the future
 10817  which will address the long file name problem.
 10818  
 10819  Directory name encryption
 10820  
 10821  Crypt offers the option of encrypting dir names or leaving them intact.
 10822  There are two options:
 10823  
 10824  True
 10825  
Encrypts the whole file path including directory names. Example:
 10827  1/12/123.txt is encrypted to
 10828  p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
 10829  
 10830  False
 10831  
Only encrypts file names, skips directory names. Example: 1/12/123.txt
is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
 10834  
 10835  Modified time and hashes
 10836  
 10837  Crypt stores modification times using the underlying remote so support
 10838  depends on that.
 10839  
 10840  Hashes are not stored for crypt. However the data integrity is protected
 10841  by an extremely strong crypto authenticator.
 10842  
 10843  Note that you should use the rclone cryptcheck command to check the
 10844  integrity of a crypted remote instead of rclone check which can’t check
 10845  the checksums properly.
 10846  
 10847  Standard Options
 10848  
 10849  Here are the standard options specific to crypt (Encrypt/Decrypt a
 10850  remote).
 10851  
 10852  –crypt-remote
 10853  
 10854  Remote to encrypt/decrypt. Normally should contain a ‘:’ and a path, eg
 10855  “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not
 10856  recommended).
 10857  
 10858  -   Config: remote
 10859  -   Env Var: RCLONE_CRYPT_REMOTE
 10860  -   Type: string
 10861  -   Default: ""
 10862  
 10863  –crypt-filename-encryption
 10864  
 10865  How to encrypt the filenames.
 10866  
 10867  -   Config: filename_encryption
 10868  -   Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
 10869  -   Type: string
 10870  -   Default: “standard”
 10871  -   Examples:
 10872      -   “off”
 10873          -   Don’t encrypt the file names. Adds a “.bin” extension only.
 10874      -   “standard”
 10875          -   Encrypt the filenames see the docs for the details.
 10876      -   “obfuscate”
 10877          -   Very simple filename obfuscation.
 10878  
 10879  –crypt-directory-name-encryption
 10880  
 10881  Option to either encrypt directory names or leave them intact.
 10882  
 10883  -   Config: directory_name_encryption
 10884  -   Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
 10885  -   Type: bool
 10886  -   Default: true
 10887  -   Examples:
 10888      -   “true”
 10889          -   Encrypt directory names.
 10890      -   “false”
 10891          -   Don’t encrypt directory names, leave them intact.
 10892  
 10893  –crypt-password
 10894  
 10895  Password or pass phrase for encryption.
 10896  
 10897  -   Config: password
 10898  -   Env Var: RCLONE_CRYPT_PASSWORD
 10899  -   Type: string
 10900  -   Default: ""
 10901  
 10902  –crypt-password2
 10903  
 10904  Password or pass phrase for salt. Optional but recommended. Should be
 10905  different to the previous password.
 10906  
 10907  -   Config: password2
 10908  -   Env Var: RCLONE_CRYPT_PASSWORD2
 10909  -   Type: string
 10910  -   Default: ""
 10911  
 10912  Advanced Options
 10913  
 10914  Here are the advanced options specific to crypt (Encrypt/Decrypt a
 10915  remote).
 10916  
 10917  –crypt-show-mapping
 10918  
 10919  For all files listed show how the names encrypt.
 10920  
 10921  If this flag is set then for each file that the remote is asked to list,
 10922  it will log (at level INFO) a line stating the decrypted file name and
 10923  the encrypted file name.
 10924  
 10925  This is so you can work out which encrypted names are which decrypted
 10926  names just in case you need to do something with the encrypted file
 10927  names, or for debugging purposes.
 10928  
 10929  -   Config: show_mapping
 10930  -   Env Var: RCLONE_CRYPT_SHOW_MAPPING
 10931  -   Type: bool
 10932  -   Default: false
 10933  
 10934  
 10935  Backing up a crypted remote
 10936  
If you wish to back up a crypted remote, it is recommended that you use
rclone sync on the encrypted files, and make sure the passwords are the
 10939  same in the new encrypted remote.
 10940  
 10941  This will have the following advantages
 10942  
 10943  -   rclone sync will check the checksums while copying
 10944  -   you can use rclone check between the encrypted remotes
 10945  -   you don’t decrypt and encrypt unnecessarily
 10946  
 10947  For example, let’s say you have your original remote at remote: with the
 10948  encrypted version at eremote: with path remote:crypt. You would then set
 10949  up the new remote remote2: and then the encrypted version eremote2: with
 10950  path remote2:crypt using the same passwords as eremote:.
 10951  
 10952  To sync the two remotes you would do
 10953  
 10954      rclone sync remote:crypt remote2:crypt
 10955  
 10956  And to check the integrity you would do
 10957  
 10958      rclone check remote:crypt remote2:crypt
 10959  
 10960  
 10961  File formats
 10962  
 10963  File encryption
 10964  
 10965  Files are encrypted 1:1 source file to destination object. The file has
 10966  a header and is divided into chunks.
 10967  
 10968  Header
 10969  
 10970  -   8 bytes magic string RCLONE\x00\x00
 10971  -   24 bytes Nonce (IV)
 10972  
The initial nonce is generated from the operating system’s
cryptographically strong random number generator. The nonce is
incremented for each chunk read
 10975  making sure each nonce is unique for each block written. The chance of a
 10976  nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸
 10977  bytes) you would have a probability of approximately 2×10⁻³² of re-using
 10978  a nonce.
 10979  
 10980  Chunk
 10981  
 10982  Each chunk will contain 64kB of data, except for the last one which may
 10983  have less data. The data chunk is in standard NACL secretbox format.
 10984  Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate
 10985  messages.
 10986  
 10987  Each chunk contains:
 10988  
 10989  -   16 Bytes of Poly1305 authenticator
 10990  -   1 - 65536 bytes XSalsa20 encrypted data
 10991  
 10992  64k chunk size was chosen as the best performing chunk size (the
 10993  authenticator takes too much time below this and the performance drops
 10994  off due to cache effects above this). Note that these chunks are
 10995  buffered in memory so they can’t be too big.
 10996  
This uses a 32 byte (256 bit) key derived from the user password.
 10998  
 10999  Examples
 11000  
 11001  1 byte file will encrypt to
 11002  
 11003  -   32 bytes header
 11004  -   17 bytes data chunk
 11005  
 11006  49 bytes total
 11007  
 11008  1MB (1048576 bytes) file will encrypt to
 11009  
 11010  -   32 bytes header
 11011  -   16 chunks of 65568 bytes
 11012  
 11013  1049120 bytes total (a 0.05% overhead). This is the overhead for big
 11014  files.
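
Putting the figures above together:

    32 + 16 × 65568 = 1049120 bytes
    (1049120 - 1048576) / 1048576 ≈ 0.05%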
 11015  
 11016  Name encryption
 11017  
 11018  File names are encrypted segment by segment - the path is broken up into
 11019  / separated strings and these are encrypted individually.
 11020  
File segments are padded using PKCS#7 to a multiple of 16 bytes before
encryption.
 11023  
They are then encrypted with EME using AES with a 256 bit key. EME
 11025  (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
 11026  paper “A Parallelizable Enciphering Mode” by Halevi and Rogaway.
 11027  
 11028  This makes for deterministic encryption which is what we want - the same
 11029  filename must encrypt to the same thing otherwise we can’t find it on
 11030  the cloud storage system.
 11031  
 11032  This means that
 11033  
 11034  -   filenames with the same name will encrypt the same
 11035  -   filenames which start the same won’t have a common prefix
 11036  
 11037  This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of
 11038  which are derived from the user password.
 11039  
 11040  After encryption they are written out using a modified version of
 11041  standard base32 encoding as described in RFC4648. The standard encoding
 11042  is modified in two ways:
 11043  
 11044  -   it becomes lower case (no-one likes upper case filenames!)
 11045  -   we strip the padding character =
 11046  
 11047  base32 is used rather than the more efficient base64 so rclone can be
 11048  used on case insensitive remotes (eg Windows, Amazon Drive).
 11049  
 11050  Key derivation
 11051  
 11052  Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional
 11053  user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key
 11054  material required. If the user doesn’t supply a salt then rclone uses an
 11055  internal one.
 11056  
 11057  scrypt makes it impractical to mount a dictionary attack on rclone
 11058  encrypted data. For full protection against this you should always use a
 11059  salt.
 11060  
 11061  
 11062  Dropbox
 11063  
 11064  Paths are specified as remote:path
 11065  
 11066  Dropbox paths may be as deep as required, eg
 11067  remote:directory/subdirectory.
 11068  
 11069  The initial setup for dropbox involves getting a token from Dropbox
 11070  which you need to do in your browser. rclone config walks you through
 11071  it.
 11072  
 11073  Here is an example of how to make a remote called remote. First run:
 11074  
 11075       rclone config
 11076  
 11077  This will guide you through an interactive setup process:
 11078  
 11079      n) New remote
 11080      d) Delete remote
 11081      q) Quit config
 11082      e/n/d/q> n
 11083      name> remote
 11084      Type of storage to configure.
 11085      Choose a number from below, or type in your own value
 11086       1 / Amazon Drive
 11087         \ "amazon cloud drive"
 11088       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 11089         \ "s3"
 11090       3 / Backblaze B2
 11091         \ "b2"
 11092       4 / Dropbox
 11093         \ "dropbox"
 11094       5 / Encrypt/Decrypt a remote
 11095         \ "crypt"
 11096       6 / Google Cloud Storage (this is not Google Drive)
 11097         \ "google cloud storage"
 11098       7 / Google Drive
 11099         \ "drive"
 11100       8 / Hubic
 11101         \ "hubic"
 11102       9 / Local Disk
 11103         \ "local"
 11104      10 / Microsoft OneDrive
 11105         \ "onedrive"
 11106      11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 11107         \ "swift"
 11108      12 / SSH/SFTP Connection
 11109         \ "sftp"
 11110      13 / Yandex Disk
 11111         \ "yandex"
 11112      Storage> 4
 11113      Dropbox App Key - leave blank normally.
 11114      app_key>
 11115      Dropbox App Secret - leave blank normally.
 11116      app_secret>
 11117      Remote config
 11118      Please visit:
 11119      https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
 11120      Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
 11121      --------------------
 11122      [remote]
 11123      app_key =
 11124      app_secret =
 11125      token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 11126      --------------------
 11127      y) Yes this is OK
 11128      e) Edit this remote
 11129      d) Delete this remote
 11130      y/e/d> y
 11131  
 11132  You can then use it like this,
 11133  
 11134  List directories in top level of your dropbox
 11135  
 11136      rclone lsd remote:
 11137  
 11138  List all the files in your dropbox
 11139  
 11140      rclone ls remote:
 11141  
 11142  To copy a local directory to a dropbox directory called backup
 11143  
 11144      rclone copy /home/source remote:backup
 11145  
 11146  Dropbox for business
 11147  
 11148  Rclone supports Dropbox for business and Team Folders.
 11149  
 11150  When using Dropbox for business remote: and remote:path/to/file will
 11151  refer to your personal folder.
 11152  
 11153  If you wish to see Team Folders you must use a leading / in the path, so
 11154  rclone lsd remote:/ will refer to the root and show you all Team Folders
 11155  and your User Folder.
 11156  
 11157  You can then use team folders like this remote:/TeamFolder and
 11158  remote:/TeamFolder/path/to/file.
 11159  
 11160  A leading / for a Dropbox personal account will do nothing, but it will
 11161  take an extra HTTP transaction so it should be avoided.
 11162  
 11163  Modified time and Hashes
 11164  
 11165  Dropbox supports modified times, but the only way to set a modification
 11166  time is to re-upload the file.
 11167  
 11168  This means that if you uploaded your data with an older version of
 11169  rclone which didn’t support the v2 API and modified times, rclone will
 11170  decide to upload all your old data to fix the modification times. If you
don’t want this to happen use the --size-only or --checksum flag to
stop it.
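
For example, using the remote from above:

    rclone sync --size-only /home/source remote:backup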
 11172  
 11173  Dropbox supports its own hash type which is checked for all transfers.
 11174  
 11175  Standard Options
 11176  
 11177  Here are the standard options specific to dropbox (Dropbox).
 11178  
 11179  –dropbox-client-id
 11180  
Dropbox App Client Id. Leave blank normally.
 11182  
 11183  -   Config: client_id
 11184  -   Env Var: RCLONE_DROPBOX_CLIENT_ID
 11185  -   Type: string
 11186  -   Default: ""
 11187  
 11188  –dropbox-client-secret
 11189  
 11190  Dropbox App Client Secret Leave blank normally.
 11191  
 11192  -   Config: client_secret
 11193  -   Env Var: RCLONE_DROPBOX_CLIENT_SECRET
 11194  -   Type: string
 11195  -   Default: ""
 11196  
 11197  Advanced Options
 11198  
 11199  Here are the advanced options specific to dropbox (Dropbox).
 11200  
 11201  –dropbox-chunk-size
 11202  
 11203  Upload chunk size. (< 150M).
 11204  
 11205  Any files larger than this will be uploaded in chunks of this size.
 11206  
 11207  Note that chunks are buffered in memory (one at a time) so rclone can
 11208  deal with retries. Setting this larger will increase the speed slightly
 11209  (at most 10% for 128MB in tests) at the cost of using more memory. It
 11210  can be set smaller if you are tight on memory.
 11211  
 11212  -   Config: chunk_size
 11213  -   Env Var: RCLONE_DROPBOX_CHUNK_SIZE
 11214  -   Type: SizeSuffix
 11215  -   Default: 48M
 11216  
 11217  –dropbox-impersonate
 11218  
 11219  Impersonate this user when using a business account.
 11220  
 11221  -   Config: impersonate
 11222  -   Env Var: RCLONE_DROPBOX_IMPERSONATE
 11223  -   Type: string
 11224  -   Default: ""
 11225  
 11226  Limitations
 11227  
 11228  Note that Dropbox is case insensitive so you can’t have a file called
 11229  “Hello.doc” and one called “hello.doc”.
 11230  
 11231  There are some file names such as thumbs.db which Dropbox can’t store.
 11232  There is a full list of them in the “Ignored Files” section of this
 11233  document. Rclone will issue an error message
 11234  File name disallowed - not uploading if it attempts to upload one of
 11235  those file names, but the sync won’t fail.
 11236  
 11237  If you have more than 10,000 files in a directory then
 11238  rclone purge dropbox:dir will return the error
 11239  Failed to purge: There are too many files involved in this operation. As
 11240  a work-around do an rclone delete dropbox:dir followed by an
 11241  rclone rmdir dropbox:dir.
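
As commands, the work-around looks like this:

    rclone delete dropbox:dir
    rclone rmdir dropbox:dir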
 11242  
 11243  
 11244  FTP
 11245  
 11246  FTP is the File Transfer Protocol. FTP support is provided using the
 11247  github.com/jlaffaye/ftp package.
 11248  
 11249  Here is an example of making an FTP configuration. First run
 11250  
 11251      rclone config
 11252  
This will guide you through an interactive setup process. An FTP remote
only needs a host together with a username and a password. With an
anonymous FTP server, you will need to use anonymous as the username
and your email address as the password.
 11257  
 11258      No remotes found - make a new one
 11259      n) New remote
 11260      r) Rename remote
 11261      c) Copy remote
 11262      s) Set configuration password
 11263      q) Quit config
 11264      n/r/c/s/q> n
 11265      name> remote
 11266      Type of storage to configure.
 11267      Enter a string value. Press Enter for the default ("").
 11268      Choose a number from below, or type in your own value
 11269      [snip]
 11270      10 / FTP Connection
 11271         \ "ftp"
 11272      [snip]
 11273      Storage> ftp
 11274      ** See help for ftp backend at: https://rclone.org/ftp/ **
 11275  
 11276      FTP host to connect to
 11277      Enter a string value. Press Enter for the default ("").
 11278      Choose a number from below, or type in your own value
 11279       1 / Connect to ftp.example.com
 11280         \ "ftp.example.com"
 11281      host> ftp.example.com
 11282      FTP username, leave blank for current username, ncw
 11283      Enter a string value. Press Enter for the default ("").
 11284      user> 
 11285      FTP port, leave blank to use default (21)
 11286      Enter a string value. Press Enter for the default ("").
 11287      port> 
 11288      FTP password
 11289      y) Yes type in my own password
 11290      g) Generate random password
 11291      y/g> y
 11292      Enter the password:
 11293      password:
 11294      Confirm the password:
 11295      password:
 11296      Use FTP over TLS (Implicit)
 11297      Enter a boolean value (true or false). Press Enter for the default ("false").
 11298      tls> 
 11299      Remote config
 11300      --------------------
 11301      [remote]
 11302      type = ftp
 11303      host = ftp.example.com
 11304      pass = *** ENCRYPTED ***
 11305      --------------------
 11306      y) Yes this is OK
 11307      e) Edit this remote
 11308      d) Delete this remote
 11309      y/e/d> y
 11310  
 11311  This remote is called remote and can now be used like this
 11312  
 11313  See all directories in the home directory
 11314  
 11315      rclone lsd remote:
 11316  
 11317  Make a new directory
 11318  
 11319      rclone mkdir remote:path/to/directory
 11320  
 11321  List the contents of a directory
 11322  
 11323      rclone ls remote:path/to/directory
 11324  
 11325  Sync /home/local/directory to the remote directory, deleting any excess
 11326  files in the directory.
 11327  
 11328      rclone sync /home/local/directory remote:directory
 11329  
 11330  Modified time
 11331  
 11332  FTP does not support modified times. Any times you see on the server
will be the time of upload.
 11334  
 11335  Checksums
 11336  
 11337  FTP does not support any checksums.
 11338  
 11339  Implicit TLS
 11340  
 11341  FTP supports implicit FTP over TLS servers (FTPS). This has to be
 11342  enabled in the config for the remote. The default FTPS port is 990 so
the port will likely have to be explicitly set in the config for the
 11344  remote.
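
A config for such a server might therefore look something like this
(ftp.example.com is just a placeholder):

    [remote]
    type = ftp
    host = ftp.example.com
    port = 990
    tls = true
    pass = *** ENCRYPTED ***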
 11345  
 11346  Standard Options
 11347  
 11348  Here are the standard options specific to ftp (FTP Connection).
 11349  
 11350  –ftp-host
 11351  
 11352  FTP host to connect to
 11353  
 11354  -   Config: host
 11355  -   Env Var: RCLONE_FTP_HOST
 11356  -   Type: string
 11357  -   Default: ""
 11358  -   Examples:
 11359      -   “ftp.example.com”
 11360          -   Connect to ftp.example.com
 11361  
 11362  –ftp-user
 11363  
 11364  FTP username, leave blank for current username, $USER
 11365  
 11366  -   Config: user
 11367  -   Env Var: RCLONE_FTP_USER
 11368  -   Type: string
 11369  -   Default: ""
 11370  
 11371  –ftp-port
 11372  
 11373  FTP port, leave blank to use default (21)
 11374  
 11375  -   Config: port
 11376  -   Env Var: RCLONE_FTP_PORT
 11377  -   Type: string
 11378  -   Default: ""
 11379  
 11380  –ftp-pass
 11381  
 11382  FTP password
 11383  
 11384  -   Config: pass
 11385  -   Env Var: RCLONE_FTP_PASS
 11386  -   Type: string
 11387  -   Default: ""
 11388  
 11389  –ftp-tls
 11390  
 11391  Use FTP over TLS (Implicit)
 11392  
 11393  -   Config: tls
 11394  -   Env Var: RCLONE_FTP_TLS
 11395  -   Type: bool
 11396  -   Default: false
 11397  
 11398  Advanced Options
 11399  
 11400  Here are the advanced options specific to ftp (FTP Connection).
 11401  
 11402  –ftp-concurrency
 11403  
 11404  Maximum number of FTP simultaneous connections, 0 for unlimited
 11405  
 11406  -   Config: concurrency
 11407  -   Env Var: RCLONE_FTP_CONCURRENCY
 11408  -   Type: int
 11409  -   Default: 0
 11410  
 11411  –ftp-no-check-certificate
 11412  
 11413  Do not verify the TLS certificate of the server
 11414  
 11415  -   Config: no_check_certificate
 11416  -   Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE
 11417  -   Type: bool
 11418  -   Default: false
 11419  
 11420  Limitations
 11421  
 11422  Note that since FTP isn’t HTTP based the following flags don’t work with
 11423  it: --dump-headers, --dump-bodies, --dump-auth
 11424  
 11425  Note that --timeout isn’t supported (but --contimeout is).
 11426  
 11427  Note that --bind isn’t supported.
 11428  
 11429  FTP could support server side move but doesn’t yet.
 11430  
 11431  Note that the ftp backend does not support the ftp_proxy environment
 11432  variable yet.
 11433  
 11434  Note that while implicit FTP over TLS is supported, explicit FTP over
 11435  TLS is not.
 11436  
 11437  
 11438  Google Cloud Storage
 11439  
11440  Paths are specified as remote:bucket (or remote: for the lsd command).
 11441  You may put subdirectories in too, eg remote:bucket/path/to/dir.
 11442  
 11443  The initial setup for google cloud storage involves getting a token from
 11444  Google Cloud Storage which you need to do in your browser. rclone config
 11445  walks you through it.
 11446  
 11447  Here is an example of how to make a remote called remote. First run:
 11448  
 11449       rclone config
 11450  
 11451  This will guide you through an interactive setup process:
 11452  
 11453      n) New remote
 11454      d) Delete remote
 11455      q) Quit config
 11456      e/n/d/q> n
 11457      name> remote
 11458      Type of storage to configure.
 11459      Choose a number from below, or type in your own value
 11460       1 / Amazon Drive
 11461         \ "amazon cloud drive"
 11462       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 11463         \ "s3"
 11464       3 / Backblaze B2
 11465         \ "b2"
 11466       4 / Dropbox
 11467         \ "dropbox"
 11468       5 / Encrypt/Decrypt a remote
 11469         \ "crypt"
 11470       6 / Google Cloud Storage (this is not Google Drive)
 11471         \ "google cloud storage"
 11472       7 / Google Drive
 11473         \ "drive"
 11474       8 / Hubic
 11475         \ "hubic"
 11476       9 / Local Disk
 11477         \ "local"
 11478      10 / Microsoft OneDrive
 11479         \ "onedrive"
 11480      11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 11481         \ "swift"
 11482      12 / SSH/SFTP Connection
 11483         \ "sftp"
 11484      13 / Yandex Disk
 11485         \ "yandex"
 11486      Storage> 6
 11487      Google Application Client Id - leave blank normally.
 11488      client_id>
 11489      Google Application Client Secret - leave blank normally.
 11490      client_secret>
 11491      Project number optional - needed only for list/create/delete buckets - see your developer console.
 11492      project_number> 12345678
 11493      Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
 11494      service_account_file>
 11495      Access Control List for new objects.
 11496      Choose a number from below, or type in your own value
 11497       1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
 11498         \ "authenticatedRead"
 11499       2 / Object owner gets OWNER access, and project team owners get OWNER access.
 11500         \ "bucketOwnerFullControl"
 11501       3 / Object owner gets OWNER access, and project team owners get READER access.
 11502         \ "bucketOwnerRead"
 11503       4 / Object owner gets OWNER access [default if left blank].
 11504         \ "private"
 11505       5 / Object owner gets OWNER access, and project team members get access according to their roles.
 11506         \ "projectPrivate"
 11507       6 / Object owner gets OWNER access, and all Users get READER access.
 11508         \ "publicRead"
 11509      object_acl> 4
 11510      Access Control List for new buckets.
 11511      Choose a number from below, or type in your own value
 11512       1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
 11513         \ "authenticatedRead"
 11514       2 / Project team owners get OWNER access [default if left blank].
 11515         \ "private"
 11516       3 / Project team members get access according to their roles.
 11517         \ "projectPrivate"
 11518       4 / Project team owners get OWNER access, and all Users get READER access.
 11519         \ "publicRead"
 11520       5 / Project team owners get OWNER access, and all Users get WRITER access.
 11521         \ "publicReadWrite"
 11522      bucket_acl> 2
 11523      Location for the newly created buckets.
 11524      Choose a number from below, or type in your own value
 11525       1 / Empty for default location (US).
 11526         \ ""
 11527       2 / Multi-regional location for Asia.
 11528         \ "asia"
 11529       3 / Multi-regional location for Europe.
 11530         \ "eu"
 11531       4 / Multi-regional location for United States.
 11532         \ "us"
 11533       5 / Taiwan.
 11534         \ "asia-east1"
 11535       6 / Tokyo.
 11536         \ "asia-northeast1"
 11537       7 / Singapore.
 11538         \ "asia-southeast1"
 11539       8 / Sydney.
 11540         \ "australia-southeast1"
 11541       9 / Belgium.
 11542         \ "europe-west1"
 11543      10 / London.
 11544         \ "europe-west2"
 11545      11 / Iowa.
 11546         \ "us-central1"
 11547      12 / South Carolina.
 11548         \ "us-east1"
 11549      13 / Northern Virginia.
 11550         \ "us-east4"
 11551      14 / Oregon.
 11552         \ "us-west1"
 11553      location> 12
 11554      The storage class to use when storing objects in Google Cloud Storage.
 11555      Choose a number from below, or type in your own value
 11556       1 / Default
 11557         \ ""
 11558       2 / Multi-regional storage class
 11559         \ "MULTI_REGIONAL"
 11560       3 / Regional storage class
 11561         \ "REGIONAL"
 11562       4 / Nearline storage class
 11563         \ "NEARLINE"
 11564       5 / Coldline storage class
 11565         \ "COLDLINE"
 11566       6 / Durable reduced availability storage class
 11567         \ "DURABLE_REDUCED_AVAILABILITY"
 11568      storage_class> 5
 11569      Remote config
 11570      Use auto config?
 11571       * Say Y if not sure
 11572       * Say N if you are working on a remote or headless machine or Y didn't work
 11573      y) Yes
 11574      n) No
 11575      y/n> y
 11576      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
 11577      Log in and authorize rclone for access
 11578      Waiting for code...
 11579      Got code
 11580      --------------------
 11581      [remote]
 11582      type = google cloud storage
 11583      client_id =
 11584      client_secret =
 11585      token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
 11586      project_number = 12345678
 11587      object_acl = private
 11588      bucket_acl = private
 11589      --------------------
 11590      y) Yes this is OK
 11591      e) Edit this remote
 11592      d) Delete this remote
 11593      y/e/d> y
 11594  
 11595  Note that rclone runs a webserver on your local machine to collect the
 11596  token as returned from Google if you use auto config mode. This only
 11597  runs from the moment it opens your browser to the moment you get back
11598  the verification code. This is on http://127.0.0.1:53682/ and you may
11599  need to unblock it temporarily if you are running a host firewall, or
11600  use manual mode.
 11601  
 11602  This remote is called remote and can now be used like this
 11603  
 11604  See all the buckets in your project
 11605  
 11606      rclone lsd remote:
 11607  
 11608  Make a new bucket
 11609  
 11610      rclone mkdir remote:bucket
 11611  
 11612  List the contents of a bucket
 11613  
 11614      rclone ls remote:bucket
 11615  
 11616  Sync /home/local/directory to the remote bucket, deleting any excess
 11617  files in the bucket.
 11618  
 11619      rclone sync /home/local/directory remote:bucket
 11620  
 11621  Service Account support
 11622  
 11623  You can set up rclone with Google Cloud Storage in an unattended mode,
 11624  i.e. not tied to a specific end-user Google account. This is useful when
 11625  you want to synchronise files onto machines that don’t have actively
 11626  logged-in users, for example build machines.
 11627  
 11628  To get credentials for Google Cloud Platform IAM Service Accounts,
 11629  please head to the Service Account section of the Google Developer
 11630  Console. Service Accounts behave just like normal User permissions in
 11631  Google Cloud Storage ACLs, so you can limit their access (e.g. make them
 11632  read only). After creating an account, a JSON file containing the
 11633  Service Account’s credentials will be downloaded onto your machines.
 11634  These credentials are what rclone will use for authentication.
 11635  
 11636  To use a Service Account instead of OAuth2 token flow, enter the path to
 11637  your Service Account credentials at the service_account_file prompt and
 11638  rclone won’t use the browser based authentication flow. If you’d rather
 11639  stuff the contents of the credentials file into the rclone config file,
 11640  you can set service_account_credentials with the actual contents of the
 11641  file instead, or set the equivalent environment variable.
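
For reference, here is a sketch of what a service account based remote
might look like in the config file (the remote name, project number and
file path are placeholders):

    [gcs]
    type = google cloud storage
    project_number = 12345678
    service_account_file = /path/to/sa-credentials.json
    object_acl = private
    bucket_acl = private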
 11642  
 11643  Application Default Credentials
 11644  
11645  If no other source of credentials is provided, rclone will fall back to
11646  Application Default Credentials. This is useful both when you have
11647  already configured authentication for your developer account, and in
11648  production when running on a google compute host. Note that if running
11649  in docker, you may need to run additional commands on your google
11650  compute machine - see this page.
 11651  
11652  Note that when application default credentials are used, there is no
11653  need to explicitly configure a project number.
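
As a sketch, if a remote is configured with no credentials at all, for
example just

    [gcs]
    type = google cloud storage

then the usual Google ADC lookup applies, so on a machine which is not
running on Google Compute Engine you might point it at a credentials
file using Google’s own environment variable (this is part of the ADC
mechanism, not an rclone option):

    export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
    rclone lsd gcs: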
 11654  
 11655  –fast-list
 11656  
 11657  This remote supports --fast-list which allows you to use fewer
 11658  transactions in exchange for more memory. See the rclone docs for more
 11659  details.
 11660  
 11661  Modified time
 11662  
11663  Google Cloud Storage stores md5sums natively and rclone stores
11664  modification times as metadata on the object, under the “mtime” key in
11665  RFC3339 format accurate to 1ns.
 11666  
 11667  Standard Options
 11668  
 11669  Here are the standard options specific to google cloud storage (Google
 11670  Cloud Storage (this is not Google Drive)).
 11671  
 11672  –gcs-client-id
 11673  
 11674  Google Application Client Id Leave blank normally.
 11675  
 11676  -   Config: client_id
 11677  -   Env Var: RCLONE_GCS_CLIENT_ID
 11678  -   Type: string
 11679  -   Default: ""
 11680  
 11681  –gcs-client-secret
 11682  
 11683  Google Application Client Secret Leave blank normally.
 11684  
 11685  -   Config: client_secret
 11686  -   Env Var: RCLONE_GCS_CLIENT_SECRET
 11687  -   Type: string
 11688  -   Default: ""
 11689  
 11690  –gcs-project-number
 11691  
 11692  Project number. Optional - needed only for list/create/delete buckets -
 11693  see your developer console.
 11694  
 11695  -   Config: project_number
 11696  -   Env Var: RCLONE_GCS_PROJECT_NUMBER
 11697  -   Type: string
 11698  -   Default: ""
 11699  
 11700  –gcs-service-account-file
 11701  
11702  Service Account Credentials JSON file path. Leave blank normally.
11703  Needed only if you want to use SA instead of interactive login.
 11704  
 11705  -   Config: service_account_file
 11706  -   Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
 11707  -   Type: string
 11708  -   Default: ""
 11709  
 11710  –gcs-service-account-credentials
 11711  
11712  Service Account Credentials JSON blob. Leave blank normally. Needed
11713  only if you want to use SA instead of interactive login.
 11714  
 11715  -   Config: service_account_credentials
 11716  -   Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
 11717  -   Type: string
 11718  -   Default: ""
 11719  
 11720  –gcs-object-acl
 11721  
 11722  Access Control List for new objects.
 11723  
 11724  -   Config: object_acl
 11725  -   Env Var: RCLONE_GCS_OBJECT_ACL
 11726  -   Type: string
 11727  -   Default: ""
 11728  -   Examples:
 11729      -   “authenticatedRead”
 11730          -   Object owner gets OWNER access, and all Authenticated Users
 11731              get READER access.
 11732      -   “bucketOwnerFullControl”
 11733          -   Object owner gets OWNER access, and project team owners get
 11734              OWNER access.
 11735      -   “bucketOwnerRead”
 11736          -   Object owner gets OWNER access, and project team owners get
 11737              READER access.
 11738      -   “private”
 11739          -   Object owner gets OWNER access [default if left blank].
 11740      -   “projectPrivate”
 11741          -   Object owner gets OWNER access, and project team members get
 11742              access according to their roles.
 11743      -   “publicRead”
 11744          -   Object owner gets OWNER access, and all Users get READER
 11745              access.
 11746  
 11747  –gcs-bucket-acl
 11748  
 11749  Access Control List for new buckets.
 11750  
 11751  -   Config: bucket_acl
 11752  -   Env Var: RCLONE_GCS_BUCKET_ACL
 11753  -   Type: string
 11754  -   Default: ""
 11755  -   Examples:
 11756      -   “authenticatedRead”
 11757          -   Project team owners get OWNER access, and all Authenticated
 11758              Users get READER access.
 11759      -   “private”
 11760          -   Project team owners get OWNER access [default if left
 11761              blank].
 11762      -   “projectPrivate”
 11763          -   Project team members get access according to their roles.
 11764      -   “publicRead”
 11765          -   Project team owners get OWNER access, and all Users get
 11766              READER access.
 11767      -   “publicReadWrite”
 11768          -   Project team owners get OWNER access, and all Users get
 11769              WRITER access.
 11770  
 11771  –gcs-bucket-policy-only
 11772  
 11773  Access checks should use bucket-level IAM policies.
 11774  
 11775  If you want to upload objects to a bucket with Bucket Policy Only set
 11776  then you will need to set this.
 11777  
 11778  When it is set, rclone:
 11779  
 11780  -   ignores ACLs set on buckets
 11781  -   ignores ACLs set on objects
 11782  -   creates buckets with Bucket Policy Only set
 11783  
 11784  Docs: https://cloud.google.com/storage/docs/bucket-policy-only
 11785  
 11786  -   Config: bucket_policy_only
 11787  -   Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
 11788  -   Type: bool
 11789  -   Default: false
 11790  
 11791  –gcs-location
 11792  
 11793  Location for the newly created buckets.
 11794  
 11795  -   Config: location
 11796  -   Env Var: RCLONE_GCS_LOCATION
 11797  -   Type: string
 11798  -   Default: ""
 11799  -   Examples:
 11800      -   ""
 11801          -   Empty for default location (US).
 11802      -   “asia”
 11803          -   Multi-regional location for Asia.
 11804      -   “eu”
 11805          -   Multi-regional location for Europe.
 11806      -   “us”
 11807          -   Multi-regional location for United States.
 11808      -   “asia-east1”
 11809          -   Taiwan.
 11810      -   “asia-east2”
 11811          -   Hong Kong.
 11812      -   “asia-northeast1”
 11813          -   Tokyo.
 11814      -   “asia-south1”
 11815          -   Mumbai.
 11816      -   “asia-southeast1”
 11817          -   Singapore.
 11818      -   “australia-southeast1”
 11819          -   Sydney.
 11820      -   “europe-north1”
 11821          -   Finland.
 11822      -   “europe-west1”
 11823          -   Belgium.
 11824      -   “europe-west2”
 11825          -   London.
 11826      -   “europe-west3”
 11827          -   Frankfurt.
 11828      -   “europe-west4”
 11829          -   Netherlands.
 11830      -   “us-central1”
 11831          -   Iowa.
 11832      -   “us-east1”
 11833          -   South Carolina.
 11834      -   “us-east4”
 11835          -   Northern Virginia.
 11836      -   “us-west1”
 11837          -   Oregon.
 11838      -   “us-west2”
 11839          -   California.
 11840  
 11841  –gcs-storage-class
 11842  
 11843  The storage class to use when storing objects in Google Cloud Storage.
 11844  
 11845  -   Config: storage_class
 11846  -   Env Var: RCLONE_GCS_STORAGE_CLASS
 11847  -   Type: string
 11848  -   Default: ""
 11849  -   Examples:
 11850      -   ""
 11851          -   Default
 11852      -   “MULTI_REGIONAL”
 11853          -   Multi-regional storage class
 11854      -   “REGIONAL”
 11855          -   Regional storage class
 11856      -   “NEARLINE”
 11857          -   Nearline storage class
 11858      -   “COLDLINE”
 11859          -   Coldline storage class
 11860      -   “DURABLE_REDUCED_AVAILABILITY”
 11861          -   Durable reduced availability storage class
 11862  
 11863  
 11864  Google Drive
 11865  
 11866  Paths are specified as drive:path
 11867  
 11868  Drive paths may be as deep as required, eg drive:directory/subdirectory.
 11869  
 11870  The initial setup for drive involves getting a token from Google drive
 11871  which you need to do in your browser. rclone config walks you through
 11872  it.
 11873  
 11874  Here is an example of how to make a remote called remote. First run:
 11875  
 11876       rclone config
 11877  
 11878  This will guide you through an interactive setup process:
 11879  
 11880      No remotes found - make a new one
 11881      n) New remote
 11882      r) Rename remote
 11883      c) Copy remote
 11884      s) Set configuration password
 11885      q) Quit config
 11886      n/r/c/s/q> n
 11887      name> remote
 11888      Type of storage to configure.
 11889      Choose a number from below, or type in your own value
 11890      [snip]
 11891      10 / Google Drive
 11892         \ "drive"
 11893      [snip]
 11894      Storage> drive
 11895      Google Application Client Id - leave blank normally.
 11896      client_id>
 11897      Google Application Client Secret - leave blank normally.
 11898      client_secret>
 11899      Scope that rclone should use when requesting access from drive.
 11900      Choose a number from below, or type in your own value
 11901       1 / Full access all files, excluding Application Data Folder.
 11902         \ "drive"
 11903       2 / Read-only access to file metadata and file contents.
 11904         \ "drive.readonly"
 11905         / Access to files created by rclone only.
 11906       3 | These are visible in the drive website.
 11907         | File authorization is revoked when the user deauthorizes the app.
 11908         \ "drive.file"
 11909         / Allows read and write access to the Application Data folder.
 11910       4 | This is not visible in the drive website.
 11911         \ "drive.appfolder"
 11912         / Allows read-only access to file metadata but
 11913       5 | does not allow any access to read or download file content.
 11914         \ "drive.metadata.readonly"
 11915      scope> 1
 11916      ID of the root folder - leave blank normally.  Fill in to access "Computers" folders. (see docs).
 11917      root_folder_id> 
 11918      Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
 11919      service_account_file>
 11920      Remote config
 11921      Use auto config?
 11922       * Say Y if not sure
 11923       * Say N if you are working on a remote or headless machine or Y didn't work
 11924      y) Yes
 11925      n) No
 11926      y/n> y
 11927      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
 11928      Log in and authorize rclone for access
 11929      Waiting for code...
 11930      Got code
 11931      Configure this as a team drive?
 11932      y) Yes
 11933      n) No
 11934      y/n> n
 11935      --------------------
 11936      [remote]
 11937      client_id = 
 11938      client_secret = 
 11939      scope = drive
 11940      root_folder_id = 
 11941      service_account_file =
 11942      token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
 11943      --------------------
 11944      y) Yes this is OK
 11945      e) Edit this remote
 11946      d) Delete this remote
 11947      y/e/d> y
 11948  
 11949  Note that rclone runs a webserver on your local machine to collect the
 11950  token as returned from Google if you use auto config mode. This only
 11951  runs from the moment it opens your browser to the moment you get back
11952  the verification code. This is on http://127.0.0.1:53682/ and you may
11953  need to unblock it temporarily if you are running a host firewall, or
11954  use manual mode.
 11955  
 11956  You can then use it like this,
 11957  
 11958  List directories in top level of your drive
 11959  
 11960      rclone lsd remote:
 11961  
 11962  List all the files in your drive
 11963  
 11964      rclone ls remote:
 11965  
 11966  To copy a local directory to a drive directory called backup
 11967  
 11968      rclone copy /home/source remote:backup
 11969  
 11970  Scopes
 11971  
11972  Rclone allows you to select which scope you would like rclone to use.
11973  This changes what type of token is granted to rclone. The scopes are
11974  defined here.
11975
11976  The scopes are:
 11977  
 11978  drive
 11979  
 11980  This is the default scope and allows full access to all files, except
 11981  for the Application Data Folder (see below).
 11982  
 11983  Choose this one if you aren’t sure.
 11984  
 11985  drive.readonly
 11986  
 11987  This allows read only access to all files. Files may be listed and
 11988  downloaded but not uploaded, renamed or deleted.
 11989  
 11990  drive.file
 11991  
 11992  With this scope rclone can read/view/modify only those files and folders
 11993  it creates.
 11994  
 11995  So if you uploaded files to drive via the web interface (or any other
 11996  means) they will not be visible to rclone.
 11997  
 11998  This can be useful if you are using rclone to backup data and you want
 11999  to be sure confidential data on your drive is not visible to rclone.
 12000  
 12001  Files created with this scope are visible in the web interface.
 12002  
 12003  drive.appfolder
 12004  
 12005  This gives rclone its own private area to store files. Rclone will not
 12006  be able to see any other files on your drive and you won’t be able to
 12007  see rclone’s files from the web interface either.
 12008  
 12009  drive.metadata.readonly
 12010  
 12011  This allows read only access to file names only. It does not allow
 12012  rclone to download or upload data, or rename or delete files or
 12013  directories.
 12014  
 12015  Root folder ID
 12016  
 12017  You can set the root_folder_id for rclone. This is the directory
 12018  (identified by its Folder ID) that rclone considers to be the root of
 12019  your drive.
 12020  
 12021  Normally you will leave this blank and rclone will determine the correct
 12022  root to use itself.
 12023  
 12024  However you can set this to restrict rclone to a specific folder
 12025  hierarchy or to access data within the “Computers” tab on the drive web
 12026  interface (where files from Google’s Backup and Sync desktop program
 12027  go).
 12028  
 12029  In order to do this you will have to find the Folder ID of the directory
 12030  you wish rclone to display. This will be the last segment of the URL
 12031  when you open the relevant folder in the drive web interface.
 12032  
 12033  So if the folder you want rclone to use has a URL which looks like
 12034  https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
 12035  in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the
 12036  root_folder_id in the config.
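
Continuing that example, the relevant part of the config would then look
something like this (the other fields are as created by rclone config):

    [remote]
    type = drive
    scope = drive
    root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh

The same value can also be supplied on the command line with the
--drive-root-folder-id flag described below.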
 12037  
 12038  NB folders under the “Computers” tab seem to be read only (drive gives a
 12039  500 error) when using rclone.
 12040  
 12041  There doesn’t appear to be an API to discover the folder IDs of the
 12042  “Computers” tab - please contact us if you know otherwise!
 12043  
 12044  Note also that rclone can’t access any data under the “Backups” tab on
 12045  the google drive web interface yet.
 12046  
 12047  Service Account support
 12048  
 12049  You can set up rclone with Google Drive in an unattended mode, i.e. not
 12050  tied to a specific end-user Google account. This is useful when you want
 12051  to synchronise files onto machines that don’t have actively logged-in
 12052  users, for example build machines.
 12053  
 12054  To use a Service Account instead of OAuth2 token flow, enter the path to
 12055  your Service Account credentials at the service_account_file prompt
 12056  during rclone config and rclone won’t use the browser based
 12057  authentication flow. If you’d rather stuff the contents of the
 12058  credentials file into the rclone config file, you can set
 12059  service_account_credentials with the actual contents of the file
 12060  instead, or set the equivalent environment variable.
 12061  
 12062  Use case - Google Apps/G-suite account and individual Drive
 12063  
 12064  Let’s say that you are the administrator of a Google Apps (old) or
 12065  G-suite account. The goal is to store data on an individual’s Drive
 12066  account, who IS a member of the domain. We’ll call the domain
 12067  EXAMPLE.COM, and the user FOO@EXAMPLE.COM.
 12068  
 12069  There’s a few steps we need to go through to accomplish this:
 12070  
 12071  1. Create a service account for example.com
 12072  
 12073  -   To create a service account and obtain its credentials, go to the
 12074      Google Developer Console.
 12075  -   You must have a project - create one if you don’t.
 12076  -   Then go to “IAM & admin” -> “Service Accounts”.
 12077  -   Use the “Create Credentials” button. Fill in “Service account name”
 12078      with something that identifies your client. “Role” can be empty.
 12079  -   Tick “Furnish a new private key” - select “Key type JSON”.
 12080  -   Tick “Enable G Suite Domain-wide Delegation”. This option makes
 12081      “impersonation” possible, as documented here: Delegating domain-wide
 12082      authority to the service account
 12083  -   These credentials are what rclone will use for authentication. If
 12084      you ever need to remove access, press the “Delete service account
 12085      key” button.
 12086  
 12087  2. Allowing API access to example.com Google Drive
 12088  
 12089  -   Go to example.com’s admin console
 12090  -   Go into “Security” (or use the search bar)
 12091  -   Select “Show more” and then “Advanced settings”
 12092  -   Select “Manage API client access” in the “Authentication” section
 12093  -   In the “Client Name” field enter the service account’s “Client ID” -
 12094      this can be found in the Developer Console under “IAM & Admin” ->
 12095      “Service Accounts”, then “View Client ID” for the newly created
 12096      service account. It is a ~21 character numerical string.
 12097  -   In the next field, “One or More API Scopes”, enter
 12098      https://www.googleapis.com/auth/drive to grant access to Google
 12099      Drive specifically.
 12100  
 12101  3. Configure rclone, assuming a new install
 12102  
 12103      rclone config
 12104  
 12105      n/s/q> n         # New
 12106      name>gdrive      # Gdrive is an example name
 12107      Storage>         # Select the number shown for Google Drive
 12108      client_id>       # Can be left blank
 12109      client_secret>   # Can be left blank
 12110      scope>           # Select your scope, 1 for example
 12111      root_folder_id>  # Can be left blank
 12112      service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
 12113      y/n>             # Auto config, y
 12114  
 12115  4. Verify that it’s working
 12116  
 12117  -   rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
 12118  -   The arguments do:
 12119      -   -v - verbose logging
 12120      -   --drive-impersonate foo@example.com - this is what does the
 12121          magic, pretending to be user foo.
 12122      -   lsf - list files in a parsing friendly way
 12123      -   gdrive:backup - use the remote called gdrive, work in the folder
 12124          named backup.
 12125  
 12126  Team drives
 12127  
 12128  If you want to configure the remote to point to a Google Team Drive then
 12129  answer y to the question Configure this as a team drive?.
 12130  
 12131  This will fetch the list of Team Drives from google and allow you to
 12132  configure which one you want to use. You can also type in a team drive
 12133  ID if you prefer.
 12134  
 12135  For example:
 12136  
 12137      Configure this as a team drive?
 12138      y) Yes
 12139      n) No
 12140      y/n> y
 12141      Fetching team drive list...
 12142      Choose a number from below, or type in your own value
 12143       1 / Rclone Test
 12144         \ "xxxxxxxxxxxxxxxxxxxx"
 12145       2 / Rclone Test 2
 12146         \ "yyyyyyyyyyyyyyyyyyyy"
 12147       3 / Rclone Test 3
 12148         \ "zzzzzzzzzzzzzzzzzzzz"
 12149      Enter a Team Drive ID> 1
 12150      --------------------
 12151      [remote]
 12152      client_id =
 12153      client_secret =
 12154      token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
 12155      team_drive = xxxxxxxxxxxxxxxxxxxx
 12156      --------------------
 12157      y) Yes this is OK
 12158      e) Edit this remote
 12159      d) Delete this remote
 12160      y/e/d> y
 12161  
 12162  –fast-list
 12163  
 12164  This remote supports --fast-list which allows you to use fewer
 12165  transactions in exchange for more memory. See the rclone docs for more
 12166  details.
 12167  
 12168  It does this by combining multiple list calls into a single API request.
 12169  
 12170  This works by combining many '%s' in parents filters into one
 12171  expression. To list the contents of directories a, b and c, the
12172  following requests will be sent by the regular List function:
 12173  
 12174      trashed=false and 'a' in parents
 12175      trashed=false and 'b' in parents
 12176      trashed=false and 'c' in parents
 12177  
 12178  These can now be combined into a single request:
 12179  
 12180      trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
 12181  
 12182  The implementation of ListR will put up to 50 parents filters into one
 12183  request. It will use the --checkers value to specify the number of
 12184  requests to run in parallel.
 12185  
 12186  In tests, these batch requests were up to 20x faster than the regular
 12187  method. Running the following command against different sized folders
 12188  gives:
 12189  
 12190      rclone lsjson -vv -R --checkers=6 gdrive:folder
 12191  
 12192  small folder (220 directories, 700 files):
 12193  
 12194  -   without --fast-list: 38s
 12195  -   with --fast-list: 10s
 12196  
 12197  large folder (10600 directories, 39000 files):
 12198  
 12199  -   without --fast-list: 22:05 min
 12200  -   with --fast-list: 58s
 12201  
 12202  Modified time
 12203  
 12204  Google drive stores modification times accurate to 1 ms.
 12205  
 12206  Revisions
 12207  
 12208  Google drive stores revisions of files. When you upload a change to an
 12209  existing file to google drive using rclone it will create a new revision
 12210  of that file.
 12211  
12212  Revisions follow the standard google policy which at the time of writing was
 12213  
12214  -   They are deleted after 30 days or 100 revisions (whichever comes
12215      first).
 12216  -   They do not count towards a user storage quota.
 12217  
 12218  Deleting files
 12219  
 12220  By default rclone will send all files to the trash when deleting files.
 12221  If deleting them permanently is required then use the
 12222  --drive-use-trash=false flag, or set the equivalent environment
 12223  variable.
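
For example, to delete the files in a directory permanently, bypassing
the trash (the path is a placeholder):

    rclone delete --drive-use-trash=false remote:path/to/dir

or, using the environment variable rather than the flag:

    RCLONE_DRIVE_USE_TRASH=false rclone delete remote:path/to/dir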
 12224  
 12225  Emptying trash
 12226  
 12227  If you wish to empty your trash you can use the rclone cleanup remote:
 12228  command which will permanently delete all your trashed files. This
 12229  command does not take any path arguments.
 12230  
 12231  Note that Google Drive takes some time (minutes to days) to empty the
 12232  trash even though the command returns within a few seconds. No output is
 12233  echoed, so there will be no confirmation even using -v or -vv.
 12234  
 12235  Quota information
 12236  
 12237  To view your current quota you can use the rclone about remote: command
 12238  which will display your usage limit (quota), the usage in Google Drive,
 12239  the size of all files in the Trash and the space used by other Google
 12240  services such as Gmail. This command does not take any path arguments.
 12241  
 12242  Import/Export of google documents
 12243  
 12244  Google documents can be exported from and uploaded to Google Drive.
 12245  
 12246  When rclone downloads a Google doc it chooses a format to download
 12247  depending upon the --drive-export-formats setting. By default the export
 12248  formats are docx,xlsx,pptx,svg which are a sensible default for an
 12249  editable document.
 12250  
 12251  When choosing a format, rclone runs down the list provided in order and
 12252  chooses the first file format the doc can be exported as from the list.
 12253  If the file can’t be exported to a format on the formats list, then
 12254  rclone will choose a format from the default list.
 12255  
 12256  If you prefer an archive copy then you might use
 12257  --drive-export-formats pdf, or if you prefer openoffice/libreoffice
 12258  formats you might use --drive-export-formats ods,odt,odp.
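
For example, to take a PDF archive copy of the Google docs in a drive
folder (the paths are placeholders):

    rclone copy --drive-export-formats pdf remote:docs /path/to/archive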
 12259  
 12260  Note that rclone adds the extension to the google doc, so if it is
 12261  called My Spreadsheet on google docs, it will be exported as
 12262  My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
 12263  
 12264  When importing files into Google Drive, rclone will convert all files
 12265  with an extension in --drive-import-formats to their associated document
12266  type. rclone will not convert any files by default, since the
12267  conversion is a lossy process.
 12268  
 12269  The conversion must result in a file with the same extension when the
 12270  --drive-export-formats rules are applied to the uploaded document.
 12271  
 12272  Here are some examples for allowed and prohibited conversions.
 12273  
 12274    export-formats   import-formats   Upload Ext   Document Ext   Allowed
 12275    ---------------- ---------------- ------------ -------------- ---------
 12276    odt              odt              odt          odt            Yes
 12277    odt              docx,odt         odt          odt            Yes
 12278                     docx             docx         docx           Yes
 12279                     odt              odt          docx           No
 12280    odt,docx         docx,odt         docx         odt            No
 12281    docx,odt         docx,odt         docx         docx           Yes
 12282    docx,odt         docx,odt         odt          docx           No
 12283  
 12284  This limitation can be disabled by specifying
 12285  --drive-allow-import-name-change. When using this flag, rclone can
 12286  convert multiple files types resulting in the same document type at
 12287  once, eg with --drive-import-formats docx,odt,txt, all files having
 12288  these extension would result in a document represented as a docx file.
 12289  This brings the additional risk of overwriting a document, if multiple
 12290  files have the same stem. Many rclone operations will not handle this
 12291  name change in any way. They assume an equal name when copying files and
 12292  might copy the file again or delete them when the name changes.
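
As a sketch of that combination of flags in use (the paths are
placeholders):

    rclone copy /path/to/local/docs remote:imported \
        --drive-import-formats docx,odt,txt \
        --drive-allow-import-name-change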
 12293  
 12294  Here are the possible export extensions with their corresponding mime
12295  types. Most of these can also be used for importing, but there are more
12296  that are not listed here. Some of these additional ones might only be
 12297  available when the operating system provides the correct MIME type
 12298  entries.
 12299  
 12300  This list can be changed by Google Drive at any time and might not
 12301  represent the currently available conversions.
 12302  
 12303    --------------------------------------------------------------------------------------------------------------------------
 12304    Extension           Mime Type                                                                   Description
 12305    ------------------- --------------------------------------------------------------------------- --------------------------
 12306    csv                 text/csv                                                                    Standard CSV format for
 12307                                                                                                    Spreadsheets
 12308  
 12309    docx                application/vnd.openxmlformats-officedocument.wordprocessingml.document     Microsoft Office Document
 12310  
 12311    epub                application/epub+zip                                                        E-book format
 12312  
 12313    html                text/html                                                                   An HTML Document
 12314  
 12315    jpg                 image/jpeg                                                                  A JPEG Image File
 12316  
 12317    json                application/vnd.google-apps.script+json                                     JSON Text Format
 12318  
 12319    odp                 application/vnd.oasis.opendocument.presentation                             Openoffice Presentation
 12320  
 12321    ods                 application/vnd.oasis.opendocument.spreadsheet                              Openoffice Spreadsheet
 12322  
 12323    ods                 application/x-vnd.oasis.opendocument.spreadsheet                            Openoffice Spreadsheet
 12324  
 12325    odt                 application/vnd.oasis.opendocument.text                                     Openoffice Document
 12326  
 12327    pdf                 application/pdf                                                             Adobe PDF Format
 12328  
 12329    png                 image/png                                                                   PNG Image Format
 12330  
 12331    pptx                application/vnd.openxmlformats-officedocument.presentationml.presentation   Microsoft Office
 12332                                                                                                    Powerpoint
 12333  
 12334    rtf                 application/rtf                                                             Rich Text Format
 12335  
 12336    svg                 image/svg+xml                                                               Scalable Vector Graphics
 12337                                                                                                    Format
 12338  
 12339    tsv                 text/tab-separated-values                                                   Standard TSV format for
 12340                                                                                                    spreadsheets
 12341  
 12342    txt                 text/plain                                                                  Plain Text
 12343  
 12344    xlsx                application/vnd.openxmlformats-officedocument.spreadsheetml.sheet           Microsoft Office
 12345                                                                                                    Spreadsheet
 12346  
 12347    zip                 application/zip                                                             A ZIP file of HTML, Images
 12348                                                                                                    CSS
 12349    --------------------------------------------------------------------------------------------------------------------------
 12350  
 12351  Google documents can also be exported as link files. These files will
 12352  open a browser window for the Google Docs website of that document when
 12353  opened. The link file extension has to be specified as a
 12354  --drive-export-formats parameter. They will match all available Google
 12355  Documents.
 12356  
 12357    Extension   Description                               OS Support
 12358    ----------- ----------------------------------------- ----------------
 12359    desktop     freedesktop.org specified desktop entry   Linux
 12360    link.html   An HTML Document with a redirect          All
 12361    url         INI style link file                       macOS, Windows
 12362    webloc      macOS specific XML format                 macOS
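
For example, to export link files instead of the documents themselves
(the paths are placeholders):

    rclone copy --drive-export-formats link.html remote:docs /path/to/links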
 12363  
 12364  Standard Options
 12365  
 12366  Here are the standard options specific to drive (Google Drive).
 12367  
 12368  –drive-client-id
 12369  
 12370  Google Application Client Id Setting your own is recommended. See
 12371  https://rclone.org/drive/#making-your-own-client-id for how to create
 12372  your own. If you leave this blank, it will use an internal key which is
 12373  low performance.
 12374  
 12375  -   Config: client_id
 12376  -   Env Var: RCLONE_DRIVE_CLIENT_ID
 12377  -   Type: string
 12378  -   Default: ""
 12379  
 12380  –drive-client-secret
 12381  
 12382  Google Application Client Secret Setting your own is recommended.
 12383  
 12384  -   Config: client_secret
 12385  -   Env Var: RCLONE_DRIVE_CLIENT_SECRET
 12386  -   Type: string
 12387  -   Default: ""
 12388  
 12389  –drive-scope
 12390  
 12391  Scope that rclone should use when requesting access from drive.
 12392  
 12393  -   Config: scope
 12394  -   Env Var: RCLONE_DRIVE_SCOPE
 12395  -   Type: string
 12396  -   Default: ""
 12397  -   Examples:
 12398      -   “drive”
 12399          -   Full access all files, excluding Application Data Folder.
 12400      -   “drive.readonly”
 12401          -   Read-only access to file metadata and file contents.
 12402      -   “drive.file”
 12403          -   Access to files created by rclone only.
 12404          -   These are visible in the drive website.
 12405          -   File authorization is revoked when the user deauthorizes the
 12406              app.
 12407      -   “drive.appfolder”
 12408          -   Allows read and write access to the Application Data folder.
 12409          -   This is not visible in the drive website.
 12410      -   “drive.metadata.readonly”
 12411          -   Allows read-only access to file metadata but
 12412          -   does not allow any access to read or download file content.
 12413  
 12414  –drive-root-folder-id
 12415  
 12416  ID of the root folder Leave blank normally. Fill in to access
 12417  “Computers” folders. (see docs).
 12418  
 12419  -   Config: root_folder_id
 12420  -   Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
 12421  -   Type: string
 12422  -   Default: ""
 12423  
 12424  –drive-service-account-file
 12425  
12426  Service Account Credentials JSON file path. Leave blank normally.
12427  Needed only if you want to use SA instead of interactive login.
 12428  
 12429  -   Config: service_account_file
 12430  -   Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
 12431  -   Type: string
 12432  -   Default: ""
 12433  
 12434  Advanced Options
 12435  
 12436  Here are the advanced options specific to drive (Google Drive).
 12437  
 12438  –drive-service-account-credentials
 12439  
12440  Service Account Credentials JSON blob. Leave blank normally. Needed
12441  only if you want to use SA instead of interactive login.
 12442  
 12443  -   Config: service_account_credentials
 12444  -   Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
 12445  -   Type: string
 12446  -   Default: ""
 12447  
 12448  –drive-team-drive
 12449  
 12450  ID of the Team Drive
 12451  
 12452  -   Config: team_drive
 12453  -   Env Var: RCLONE_DRIVE_TEAM_DRIVE
 12454  -   Type: string
 12455  -   Default: ""
 12456  
 12457  –drive-auth-owner-only
 12458  
 12459  Only consider files owned by the authenticated user.
 12460  
 12461  -   Config: auth_owner_only
 12462  -   Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
 12463  -   Type: bool
 12464  -   Default: false
 12465  
 12466  –drive-use-trash
 12467  
 12468  Send files to the trash instead of deleting permanently. Defaults to
 12469  true, namely sending files to the trash. Use --drive-use-trash=false to
 12470  delete files permanently instead.
 12471  
 12472  -   Config: use_trash
 12473  -   Env Var: RCLONE_DRIVE_USE_TRASH
 12474  -   Type: bool
 12475  -   Default: true
 12476  
 12477  –drive-skip-gdocs
 12478  
 12479  Skip google documents in all listings. If given, gdocs practically
 12480  become invisible to rclone.
 12481  
 12482  -   Config: skip_gdocs
 12483  -   Env Var: RCLONE_DRIVE_SKIP_GDOCS
 12484  -   Type: bool
 12485  -   Default: false
 12486  
 12487  –drive-skip-checksum-gphotos
 12488  
 12489  Skip MD5 checksum on Google photos and videos only.
 12490  
 12491  Use this if you get checksum errors when transferring Google photos or
 12492  videos.
 12493  
 12494  Setting this flag will cause Google photos and videos to return a blank
 12495  MD5 checksum.
 12496  
12497  Google photos are identified by being in the “photos” space.
 12498  
 12499  Corrupted checksums are caused by Google modifying the image/video but
 12500  not updating the checksum.
 12501  
 12502  -   Config: skip_checksum_gphotos
 12503  -   Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
 12504  -   Type: bool
 12505  -   Default: false
 12506  
 12507  –drive-shared-with-me
 12508  
 12509  Only show files that are shared with me.
 12510  
 12511  Instructs rclone to operate on your “Shared with me” folder (where
 12512  Google Drive lets you access the files and folders others have shared
 12513  with you).
 12514  
 12515  This works both with the “list” (lsd, lsl, etc) and the “copy” commands
 12516  (copy, sync, etc), and with all other commands too.
 12517  
 12518  -   Config: shared_with_me
 12519  -   Env Var: RCLONE_DRIVE_SHARED_WITH_ME
 12520  -   Type: bool
 12521  -   Default: false
 12522  
 12523  –drive-trashed-only
 12524  
 12525  Only show files that are in the trash. This will show trashed files in
 12526  their original directory structure.
 12527  
 12528  -   Config: trashed_only
 12529  -   Env Var: RCLONE_DRIVE_TRASHED_ONLY
 12530  -   Type: bool
 12531  -   Default: false
 12532  
 12533  –drive-formats
 12534  
 12535  Deprecated: see export_formats
 12536  
 12537  -   Config: formats
 12538  -   Env Var: RCLONE_DRIVE_FORMATS
 12539  -   Type: string
 12540  -   Default: ""
 12541  
 12542  –drive-export-formats
 12543  
 12544  Comma separated list of preferred formats for downloading Google docs.
 12545  
 12546  -   Config: export_formats
 12547  -   Env Var: RCLONE_DRIVE_EXPORT_FORMATS
 12548  -   Type: string
 12549  -   Default: “docx,xlsx,pptx,svg”
 12550  
 12551  –drive-import-formats
 12552  
 12553  Comma separated list of preferred formats for uploading Google docs.
 12554  
 12555  -   Config: import_formats
 12556  -   Env Var: RCLONE_DRIVE_IMPORT_FORMATS
 12557  -   Type: string
 12558  -   Default: ""
 12559  
 12560  –drive-allow-import-name-change
 12561  
 12562  Allow the filetype to change when uploading Google docs (e.g. file.doc
 12563  to file.docx). This will confuse sync and reupload every time.
 12564  
 12565  -   Config: allow_import_name_change
 12566  -   Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
 12567  -   Type: bool
 12568  -   Default: false
 12569  
 12570  –drive-use-created-date
 12571  
12572  Use file created date instead of modified date.
 12573  
 12574  Useful when downloading data and you want the creation date used in
 12575  place of the last modified date.
 12576  
 12577  WARNING: This flag may have some unexpected consequences.
 12578  
12579  When uploading to your drive, all files will be overwritten unless they
12580  haven’t been modified since their creation, and the inverse will occur
12581  while downloading. This side effect can be avoided by using the
12582  --checksum flag.
 12583  
12584  This feature was implemented to retain the photo capture date as
12585  recorded by google photos. You will first need to check the “Create a
12586  Google Photos folder” option in your google drive settings. You can then
12587  copy or move the photos locally and have the date the image was taken
12588  (created) set as the modification date.
 12589  
 12590  -   Config: use_created_date
 12591  -   Env Var: RCLONE_DRIVE_USE_CREATED_DATE
 12592  -   Type: bool
 12593  -   Default: false
 12594  
 12595  –drive-list-chunk
 12596  
 12597  Size of listing chunk 100-1000. 0 to disable.
 12598  
 12599  -   Config: list_chunk
 12600  -   Env Var: RCLONE_DRIVE_LIST_CHUNK
 12601  -   Type: int
 12602  -   Default: 1000
 12603  
 12604  –drive-impersonate
 12605  
 12606  Impersonate this user when using a service account.
 12607  
 12608  -   Config: impersonate
 12609  -   Env Var: RCLONE_DRIVE_IMPERSONATE
 12610  -   Type: string
 12611  -   Default: ""
 12612  
 12613  –drive-alternate-export
 12614  
12615  Use alternate export URLs for google documents export.
 12616  
 12617  If this option is set this instructs rclone to use an alternate set of
 12618  export URLs for drive documents. Users have reported that the official
 12619  export URLs can’t export large documents, whereas these unofficial ones
 12620  can.
 12621  
 12622  See rclone issue #2243 for background, this google drive issue and this
 12623  helpful post.
 12624  
 12625  -   Config: alternate_export
 12626  -   Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
 12627  -   Type: bool
 12628  -   Default: false
 12629  
 12630  –drive-upload-cutoff
 12631  
 12632  Cutoff for switching to chunked upload
 12633  
 12634  -   Config: upload_cutoff
 12635  -   Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
 12636  -   Type: SizeSuffix
 12637  -   Default: 8M
 12638  
 12639  –drive-chunk-size
 12640  
12641  Upload chunk size. Must be a power of 2 >= 256k.
 12642  
 12643  Making this larger will improve performance, but note that each chunk is
12644  buffered in memory, one per transfer.
 12645  
 12646  Reducing this will reduce memory usage but decrease performance.
 12647  
 12648  -   Config: chunk_size
 12649  -   Env Var: RCLONE_DRIVE_CHUNK_SIZE
 12650  -   Type: SizeSuffix
 12651  -   Default: 8M
 12652  
 12653  –drive-acknowledge-abuse
 12654  
 12655  Set to allow files which return cannotDownloadAbusiveFile to be
 12656  downloaded.
 12657  
 12658  If downloading a file returns the error “This file has been identified
 12659  as malware or spam and cannot be downloaded” with the error code
 12660  “cannotDownloadAbusiveFile” then supply this flag to rclone to indicate
 12661  you acknowledge the risks of downloading the file and rclone will
 12662  download it anyway.
 12663  
 12664  -   Config: acknowledge_abuse
 12665  -   Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
 12666  -   Type: bool
 12667  -   Default: false
 12668  
 12669  –drive-keep-revision-forever
 12670  
 12671  Keep new head revision of each file forever.
 12672  
 12673  -   Config: keep_revision_forever
 12674  -   Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
 12675  -   Type: bool
 12676  -   Default: false
 12677  
 12678  –drive-size-as-quota
 12679  
 12680  Show storage quota usage for file size.
 12681  
 12682  The storage used by a file is the size of the current version plus any
 12683  older versions that have been set to keep forever.
 12684  
 12685  -   Config: size_as_quota
 12686  -   Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
 12687  -   Type: bool
 12688  -   Default: false
 12689  
 12690  –drive-v2-download-min-size
 12691  
12692  If objects are greater than this, use the drive v2 API to download.
 12693  
 12694  -   Config: v2_download_min_size
 12695  -   Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
 12696  -   Type: SizeSuffix
 12697  -   Default: off
 12698  
 12699  –drive-pacer-min-sleep
 12700  
 12701  Minimum time to sleep between API calls.
 12702  
 12703  -   Config: pacer_min_sleep
 12704  -   Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
 12705  -   Type: Duration
 12706  -   Default: 100ms
 12707  
 12708  –drive-pacer-burst
 12709  
 12710  Number of API calls to allow without sleeping.
 12711  
 12712  -   Config: pacer_burst
 12713  -   Env Var: RCLONE_DRIVE_PACER_BURST
 12714  -   Type: int
 12715  -   Default: 100
 12716  
 12717  –drive-server-side-across-configs
 12718  
 12719  Allow server side operations (eg copy) to work across different drive
 12720  configs.
 12721  
 12722  This can be useful if you wish to do a server side copy between two
 12723  different Google drives. Note that this isn’t enabled by default because
12724  it isn’t easy to tell if it will work between any two configurations.
 12725  
 12726  -   Config: server_side_across_configs
 12727  -   Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
 12728  -   Type: bool
 12729  -   Default: false
 12730  
 12731  Limitations
 12732  
 12733  Drive has quite a lot of rate limiting. This causes rclone to be limited
 12734  to transferring about 2 files per second only. Individual files may be
 12735  transferred much faster at 100s of MBytes/s but lots of small files can
 12736  take a long time.
 12737  
 12738  Server side copies are also subject to a separate rate limit. If you see
 12739  User rate limit exceeded errors, wait at least 24 hours and retry. You
 12740  can disable server side copies with --disable copy to download and
 12741  upload the files if you prefer.
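
For example, to force a copy to download and re-upload rather than copy
server side (the paths are placeholders):

    rclone copy --disable copy remote:source remote:destination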
 12742  
 12743  Limitations of Google Docs
 12744  
 12745  Google docs will appear as size -1 in rclone ls and as size 0 in
 12746  anything which uses the VFS layer, eg rclone mount, rclone serve.
 12747  
 12748  This is because rclone can’t find out the size of the Google docs
 12749  without downloading them.
 12750  
 12751  Google docs will transfer correctly with rclone sync, rclone copy etc as
 12752  rclone knows to ignore the size when doing the transfer.
 12753  
 12754  However an unfortunate consequence of this is that you can’t download
 12755  Google docs using rclone mount - you will get a 0 sized file. If you try
 12756  again the doc may gain its correct size and be downloadable.
 12757  
 12758  Duplicated files
 12759  
 12760  Sometimes, for no reason I’ve been able to track down, drive will
12761  duplicate a file that rclone uploads. Drive, unlike all the other
12762  remotes, can have duplicated files.
 12763  
 12764  Duplicated files cause problems with the syncing and you will see
 12765  messages in the log about duplicates.
 12766  
 12767  Use rclone dedupe to fix duplicated files.
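
For example, to find and fix duplicates interactively:

    rclone dedupe remote:path

or non-interactively, keeping only the newest of each set of duplicates:

    rclone dedupe --dedupe-mode newest remote:path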
 12768  
 12769  Note that this isn’t just a problem with rclone, even Google Photos on
 12770  Android duplicates files on drive sometimes.
 12771  
 12772  Rclone appears to be re-copying files it shouldn’t
 12773  
 12774  The most likely cause of this is the duplicated file issue above - run
 12775  rclone dedupe and check your logs for duplicate object or directory
 12776  messages.
 12777  
 12778  This can also be caused by a delay/caching on google drive’s end when
 12779  comparing directory listings. Specifically with team drives used in
combination with --fast-list. Files that were uploaded recently may not
appear on the directory list sent to rclone when using --fast-list.

Waiting a moderate period of time between attempts (estimated to be
approximately 1 hour) and/or not using --fast-list both seem to be
 12785  effective in preventing the problem.
 12786  
 12787  Making your own client_id
 12788  
 12789  When you use rclone with Google drive in its default configuration you
 12790  are using rclone’s client_id. This is shared between all the rclone
 12791  users. There is a global rate limit on the number of queries per second
 12792  that each client_id can do set by Google. rclone already has a high
 12793  quota and I will continue to make sure it is high enough by contacting
 12794  Google.
 12795  
 12796  It is strongly recommended to use your own client ID as the default
 12797  rclone ID is heavily used. If you have multiple services running, it is
 12798  recommended to use an API key for each service. The default Google quota
is 10 transactions per second so it is recommended to stay under that
number, as exceeding it will cause rclone to be rate limited and make
things slower.
 12802  
 12803  Here is how to create your own Google Drive client ID for rclone:
 12804  
 12805  1.  Log into the Google API Console with your Google account. It doesn’t
 12806      matter what Google account you use. (It need not be the same account
 12807      as the Google Drive you want to access)
 12808  
 12809  2.  Select a project or create a new project.
 12810  
3.  Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the
    “Google Drive API”.
 12813  
 12814  4.  Click “Credentials” in the left-side panel (not “Create
 12815      credentials”, which opens the wizard), then “Create credentials”,
 12816      then “OAuth client ID”. It will prompt you to set the OAuth consent
 12817      screen product name, if you haven’t set one already.
 12818  
 12819  5.  Choose an application type of “other”, and click “Create”. (the
 12820      default name is fine)
 12821  
 12822  6.  It will show you a client ID and client secret. Use these values in
 12823      rclone config to add a new remote or edit an existing remote.
 12824  
 12825  (Thanks to @balazer on github for these instructions.)
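
Once you have them, a drive remote using your own client ID might look
something like this in rclone.conf (all values below are placeholders):

    [gdrive]
    type = drive
    client_id = 123456789012-xxxxxxxx.apps.googleusercontent.com
    client_secret = YOUR_CLIENT_SECRET
    scope = drive
    token = {"access_token":"XXXXXX"}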
 12826  
 12827  
 12828  HTTP
 12829  
The HTTP remote is a read only remote for reading files from a webserver.
 12831  The webserver should provide file listings which rclone will read and
 12832  turn into a remote. This has been tested with common webservers such as
 12833  Apache/Nginx/Caddy and will likely work with file listings from most web
 12834  servers. (If it doesn’t then please file an issue, or send a pull
 12835  request!)
 12836  
 12837  Paths are specified as remote: or remote:path/to/dir.
 12838  
 12839  Here is an example of how to make a remote called remote. First run:
 12840  
 12841       rclone config
 12842  
 12843  This will guide you through an interactive setup process:
 12844  
 12845      No remotes found - make a new one
 12846      n) New remote
 12847      s) Set configuration password
 12848      q) Quit config
 12849      n/s/q> n
 12850      name> remote
 12851      Type of storage to configure.
 12852      Choose a number from below, or type in your own value
 12853       1 / Amazon Drive
 12854         \ "amazon cloud drive"
 12855       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 12856         \ "s3"
 12857       3 / Backblaze B2
 12858         \ "b2"
 12859       4 / Dropbox
 12860         \ "dropbox"
 12861       5 / Encrypt/Decrypt a remote
 12862         \ "crypt"
 12863       6 / FTP Connection
 12864         \ "ftp"
 12865       7 / Google Cloud Storage (this is not Google Drive)
 12866         \ "google cloud storage"
 12867       8 / Google Drive
 12868         \ "drive"
 12869       9 / Hubic
 12870         \ "hubic"
 12871      10 / Local Disk
 12872         \ "local"
 12873      11 / Microsoft OneDrive
 12874         \ "onedrive"
 12875      12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 12876         \ "swift"
 12877      13 / SSH/SFTP Connection
 12878         \ "sftp"
 12879      14 / Yandex Disk
 12880         \ "yandex"
 12881      15 / http Connection
 12882         \ "http"
 12883      Storage> http
 12884      URL of http host to connect to
 12885      Choose a number from below, or type in your own value
 12886       1 / Connect to example.com
 12887         \ "https://example.com"
 12888      url> https://beta.rclone.org
 12889      Remote config
 12890      --------------------
 12891      [remote]
 12892      url = https://beta.rclone.org
 12893      --------------------
 12894      y) Yes this is OK
 12895      e) Edit this remote
 12896      d) Delete this remote
 12897      y/e/d> y
 12898      Current remotes:
 12899  
 12900      Name                 Type
 12901      ====                 ====
 12902      remote               http
 12903  
 12904      e) Edit existing remote
 12905      n) New remote
 12906      d) Delete remote
 12907      r) Rename remote
 12908      c) Copy remote
 12909      s) Set configuration password
 12910      q) Quit config
 12911      e/n/d/r/c/s/q> q
 12912  
 12913  This remote is called remote and can now be used like this
 12914  
 12915  See all the top level directories
 12916  
 12917      rclone lsd remote:
 12918  
 12919  List the contents of a directory
 12920  
 12921      rclone ls remote:directory
 12922  
 12923  Sync the remote directory to /home/local/directory, deleting any excess
 12924  files.
 12925  
 12926      rclone sync remote:directory /home/local/directory
 12927  
 12928  Read only
 12929  
 12930  This remote is read only - you can’t upload files to an HTTP server.
 12931  
 12932  Modified time
 12933  
 12934  Most HTTP servers store time accurate to 1 second.
 12935  
 12936  Checksum
 12937  
 12938  No checksums are stored.
 12939  
 12940  Usage without a config file
 12941  
 12942  Since the http remote only has one config parameter it is easy to use
 12943  without a config file:
 12944  
 12945      rclone lsd --http-url https://beta.rclone.org :http:
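
Or, as a further illustration, copy a directory from it to the local
disk (the paths here are made up):

    rclone copy --http-url https://beta.rclone.org :http:some/dir /tmp/some-dir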
 12946  
 12947  Standard Options
 12948  
 12949  Here are the standard options specific to http (http Connection).
 12950  
 12951  –http-url
 12952  
 12953  URL of http host to connect to
 12954  
 12955  -   Config: url
 12956  -   Env Var: RCLONE_HTTP_URL
 12957  -   Type: string
 12958  -   Default: ""
 12959  -   Examples:
 12960      -   “https://example.com”
 12961          -   Connect to example.com
 12962      -   “https://user:pass@example.com”
 12963          -   Connect to example.com using a username and password
 12964  
 12965  Advanced Options
 12966  
 12967  Here are the advanced options specific to http (http Connection).
 12968  
 12969  –http-no-slash
 12970  
 12971  Set this if the site doesn’t end directories with /
 12972  
 12973  Use this if your target website does not use / on the end of
 12974  directories.
 12975  
 12976  A / on the end of a path is how rclone normally tells the difference
 12977  between files and directories. If this flag is set, then rclone will
 12978  treat all files with Content-Type: text/html as directories and read
 12979  URLs from them rather than downloading them.
 12980  
 12981  Note that this may cause rclone to confuse genuine HTML files with
 12982  directories.
 12983  
 12984  -   Config: no_slash
 12985  -   Env Var: RCLONE_HTTP_NO_SLASH
 12986  -   Type: bool
 12987  -   Default: false
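
For example (the URL is illustrative):

    rclone lsd --http-url https://example.com --http-no-slash :http: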
 12988  
 12989  
 12990  Hubic
 12991  
Paths are specified as remote:container (or remote: for the lsd
 12995  command.) You may put subdirectories in too, eg
 12996  remote:container/path/to/dir.
 12997  
 12998  The initial setup for Hubic involves getting a token from Hubic which
 12999  you need to do in your browser. rclone config walks you through it.
 13000  
 13001  Here is an example of how to make a remote called remote. First run:
 13002  
 13003       rclone config
 13004  
 13005  This will guide you through an interactive setup process:
 13006  
 13007      n) New remote
 13008      s) Set configuration password
 13009      n/s> n
 13010      name> remote
 13011      Type of storage to configure.
 13012      Choose a number from below, or type in your own value
 13013       1 / Amazon Drive
 13014         \ "amazon cloud drive"
 13015       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 13016         \ "s3"
 13017       3 / Backblaze B2
 13018         \ "b2"
 13019       4 / Dropbox
 13020         \ "dropbox"
 13021       5 / Encrypt/Decrypt a remote
 13022         \ "crypt"
 13023       6 / Google Cloud Storage (this is not Google Drive)
 13024         \ "google cloud storage"
 13025       7 / Google Drive
 13026         \ "drive"
 13027       8 / Hubic
 13028         \ "hubic"
 13029       9 / Local Disk
 13030         \ "local"
 13031      10 / Microsoft OneDrive
 13032         \ "onedrive"
 13033      11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 13034         \ "swift"
 13035      12 / SSH/SFTP Connection
 13036         \ "sftp"
 13037      13 / Yandex Disk
 13038         \ "yandex"
 13039      Storage> 8
 13040      Hubic Client Id - leave blank normally.
 13041      client_id>
 13042      Hubic Client Secret - leave blank normally.
 13043      client_secret>
 13044      Remote config
 13045      Use auto config?
 13046       * Say Y if not sure
 13047       * Say N if you are working on a remote or headless machine
 13048      y) Yes
 13049      n) No
 13050      y/n> y
 13051      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
 13052      Log in and authorize rclone for access
 13053      Waiting for code...
 13054      Got code
 13055      --------------------
 13056      [remote]
 13057      client_id =
 13058      client_secret =
 13059      token = {"access_token":"XXXXXX"}
 13060      --------------------
 13061      y) Yes this is OK
 13062      e) Edit this remote
 13063      d) Delete this remote
 13064      y/e/d> y
 13065  
 13066  See the remote setup docs for how to set it up on a machine with no
 13067  Internet browser available.
 13068  
 13069  Note that rclone runs a webserver on your local machine to collect the
 13070  token as returned from Hubic. This only runs from the moment it opens
 13071  your browser to the moment you get back the verification code. This is
on http://127.0.0.1:53682/ and it may require you to unblock it
 13073  temporarily if you are running a host firewall.
 13074  
 13075  Once configured you can then use rclone like this,
 13076  
 13077  List containers in the top level of your Hubic
 13078  
 13079      rclone lsd remote:
 13080  
 13081  List all the files in your Hubic
 13082  
 13083      rclone ls remote:
 13084  
To copy a local directory to a Hubic directory called backup
 13086  
 13087      rclone copy /home/source remote:backup
 13088  
 13089  If you want the directory to be visible in the official _Hubic browser_,
 13090  you need to copy your files to the default directory
 13091  
 13092      rclone copy /home/source remote:default/backup
 13093  
 13094  –fast-list
 13095  
 13096  This remote supports --fast-list which allows you to use fewer
 13097  transactions in exchange for more memory. See the rclone docs for more
 13098  details.
 13099  
 13100  Modified time
 13101  
 13102  The modified time is stored as metadata on the object as
 13103  X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
 13104  
 13105  This is a de facto standard (used in the official python-swiftclient
 13106  amongst others) for storing the modification time for an object.
 13107  
Note that Hubic wraps the Swift backend, so most of the properties of
the Swift backend are the same.
 13110  
 13111  Standard Options
 13112  
 13113  Here are the standard options specific to hubic (Hubic).
 13114  
 13115  –hubic-client-id
 13116  
 13117  Hubic Client Id Leave blank normally.
 13118  
 13119  -   Config: client_id
 13120  -   Env Var: RCLONE_HUBIC_CLIENT_ID
 13121  -   Type: string
 13122  -   Default: ""
 13123  
 13124  –hubic-client-secret
 13125  
 13126  Hubic Client Secret Leave blank normally.
 13127  
 13128  -   Config: client_secret
 13129  -   Env Var: RCLONE_HUBIC_CLIENT_SECRET
 13130  -   Type: string
 13131  -   Default: ""
 13132  
 13133  Advanced Options
 13134  
 13135  Here are the advanced options specific to hubic (Hubic).
 13136  
 13137  –hubic-chunk-size
 13138  
 13139  Above this size files will be chunked into a _segments container.
 13140  
 13141  Above this size files will be chunked into a _segments container. The
 13142  default for this is 5GB which is its maximum value.
 13143  
 13144  -   Config: chunk_size
 13145  -   Env Var: RCLONE_HUBIC_CHUNK_SIZE
 13146  -   Type: SizeSuffix
 13147  -   Default: 5G
 13148  
 13149  –hubic-no-chunk
 13150  
 13151  Don’t chunk files during streaming upload.
 13152  
 13153  When doing streaming uploads (eg using rcat or mount) setting this flag
 13154  will cause the swift backend to not upload chunked files.
 13155  
 13156  This will limit the maximum upload size to 5GB. However non chunked
 13157  files are easier to deal with and have an MD5SUM.
 13158  
 13159  Rclone will still chunk files bigger than chunk_size when doing normal
 13160  copy operations.
 13161  
 13162  -   Config: no_chunk
 13163  -   Env Var: RCLONE_HUBIC_NO_CHUNK
 13164  -   Type: bool
 13165  -   Default: false
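
As an illustration, a streaming upload with chunking disabled might look
like this (the file and path are made up):

    cat backup.tar | rclone rcat --hubic-no-chunk remote:default/backup.tar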
 13166  
 13167  Limitations
 13168  
 13169  This uses the normal OpenStack Swift mechanism to refresh the Swift API
 13170  credentials and ignores the expires field returned by the Hubic API.
 13171  
 13172  The Swift API doesn’t return a correct MD5SUM for segmented files
 13173  (Dynamic or Static Large Objects) so rclone won’t check or use the
 13174  MD5SUM for these.
 13175  
 13176  
 13177  Jottacloud
 13178  
 13179  Paths are specified as remote:path
 13180  
 13181  Paths may be as deep as required, eg remote:directory/subdirectory.
 13182  
 13183  To configure Jottacloud you will need to enter your username and
 13184  password and select a mountpoint.
 13185  
 13186  Here is an example of how to make a remote called remote. First run:
 13187  
 13188       rclone config
 13189  
 13190  This will guide you through an interactive setup process:
 13191  
 13192      No remotes found - make a new one
 13193      n) New remote
 13194      s) Set configuration password
 13195      q) Quit config
 13196      n/s/q> n
 13197      name> jotta
 13198      Type of storage to configure.
 13199      Enter a string value. Press Enter for the default ("").
 13200      Choose a number from below, or type in your own value
 13201      [snip]
 13202      14 / JottaCloud
 13203         \ "jottacloud"
 13204      [snip]
 13205      Storage> jottacloud
 13206      ** See help for jottacloud backend at: https://rclone.org/jottacloud/ **
 13207  
 13208      User Name:
 13209      Enter a string value. Press Enter for the default ("").
 13210      user> user@email.tld
 13211      Edit advanced config? (y/n)
 13212      y) Yes
 13213      n) No
 13214      y/n> n
 13215      Remote config
 13216  
 13217      Do you want to create a machine specific API key?
 13218  
 13219      Rclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.
 13220  
 13221      y) Yes
 13222      n) No
 13223      y/n> y
 13224      Your Jottacloud password is only required during setup and will not be stored.
 13225      password:
 13226  
 13227      Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?
 13228  
 13229      y) Yes
 13230      n) No
 13231      y/n> y
 13232      Please select the device to use. Normally this will be Jotta
 13233      Choose a number from below, or type in an existing value
 13234       1 > DESKTOP-3H31129
 13235       2 > test1
 13236       3 > Jotta
 13237      Devices> 3
 13238      Please select the mountpoint to user. Normally this will be Archive
 13239      Choose a number from below, or type in an existing value
 13240       1 > Archive
 13241       2 > Shared
 13242       3 > Sync
 13243      Mountpoints> 1
 13244      --------------------
 13245      [jotta]
 13246      type = jottacloud
 13247      user = 0xC4KE@gmail.com
 13248      client_id = .....
 13249      client_secret = ........
 13250      token = {........}
 13251      device = Jotta
 13252      mountpoint = Archive
 13253      --------------------
 13254      y) Yes this is OK
 13255      e) Edit this remote
 13256      d) Delete this remote
 13257      y/e/d> y
 13258  
 13259  Once configured you can then use rclone like this,
 13260  
 13261  List directories in top level of your Jottacloud
 13262  
 13263      rclone lsd remote:
 13264  
 13265  List all the files in your Jottacloud
 13266  
 13267      rclone ls remote:
 13268  
To copy a local directory to a Jottacloud directory called backup
 13270  
 13271      rclone copy /home/source remote:backup
 13272  
 13273  Devices and Mountpoints
 13274  
 13275  The official Jottacloud client registers a device for each computer you
 13276  install it on and then creates a mountpoint for each folder you select
 13277  for Backup. The web interface uses a special device called Jotta for the
Archive, Sync and Shared mountpoints. In most cases you’ll want to use
the Jotta/Archive device/mountpoint. However, if you want to access
files uploaded by the official Jottacloud client, rclone provides the
option to select other devices and mountpoints during config.
 13282  
 13283  –fast-list
 13284  
 13285  This remote supports --fast-list which allows you to use fewer
 13286  transactions in exchange for more memory. See the rclone docs for more
 13287  details.
 13288  
 13289  Note that the implementation in Jottacloud always uses only a single API
request to get the entire list, so for large folders this could lead to
a long wait time before the first results are shown.
 13292  
 13293  Modified time and hashes
 13294  
 13295  Jottacloud allows modification times to be set on objects accurate to 1
 13296  second. These will be used to detect whether objects need syncing or
 13297  not.
 13298  
 13299  Jottacloud supports MD5 type hashes, so you can use the --checksum flag.
 13300  
 13301  Note that Jottacloud requires the MD5 hash before upload so if the
 13302  source does not have an MD5 checksum then the file will be cached
 13303  temporarily on disk (wherever the TMPDIR environment variable points to)
 13304  before it is uploaded. Small files will be cached in memory - see the
 13305  --jottacloud-md5-memory-limit flag.
 13306  
 13307  Deleting files
 13308  
 13309  By default rclone will send all files to the trash when deleting files.
 13310  Due to a lack of API documentation emptying the trash is currently only
 13311  possible via the Jottacloud website. If deleting permanently is required
 13312  then use the --jottacloud-hard-delete flag, or set the equivalent
 13313  environment variable.
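
For example, to delete a directory tree permanently, bypassing the trash
(the path is illustrative, the remote name is from the example above):

    rclone purge --jottacloud-hard-delete jotta:old-backup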
 13314  
 13315  Versions
 13316  
Jottacloud supports file versioning. When rclone uploads a changed
file, a new version of it is created. Currently rclone only supports
 13319  retrieving the current version but older versions can be accessed via
 13320  the Jottacloud Website.
 13321  
 13322  Quota information
 13323  
 13324  To view your current quota you can use the rclone about remote: command
 13325  which will display your usage limit (unless it is unlimited) and the
 13326  current usage.
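
For example, using the remote configured above:

    rclone about jotta: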
 13327  
 13328  Device IDs
 13329  
Jottacloud requires each ‘device’ to be registered. Rclone includes such
a registration to easily access your account, but if you want to use
Jottacloud together with rclone on multiple machines you NEED to create
a separate deviceID/deviceSecret on each machine. You will be asked to
do this while setting up the remote. Please be aware that this also means that copying
 13335  the rclone config from one machine to another does NOT work with
 13336  Jottacloud accounts. You have to create it on each machine.
 13337  
 13338  Standard Options
 13339  
 13340  Here are the standard options specific to jottacloud (JottaCloud).
 13341  
 13342  –jottacloud-user
 13343  
 13344  User Name:
 13345  
 13346  -   Config: user
 13347  -   Env Var: RCLONE_JOTTACLOUD_USER
 13348  -   Type: string
 13349  -   Default: ""
 13350  
 13351  Advanced Options
 13352  
 13353  Here are the advanced options specific to jottacloud (JottaCloud).
 13354  
 13355  –jottacloud-md5-memory-limit
 13356  
 13357  Files bigger than this will be cached on disk to calculate the MD5 if
 13358  required.
 13359  
 13360  -   Config: md5_memory_limit
 13361  -   Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
 13362  -   Type: SizeSuffix
 13363  -   Default: 10M
 13364  
 13365  –jottacloud-hard-delete
 13366  
 13367  Delete files permanently rather than putting them into the trash.
 13368  
 13369  -   Config: hard_delete
 13370  -   Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
 13371  -   Type: bool
 13372  -   Default: false
 13373  
 13374  –jottacloud-unlink
 13375  
 13376  Remove existing public link to file/folder with link command rather than
 13377  creating. Default is false, meaning link command will create or retrieve
 13378  public link.
 13379  
 13380  -   Config: unlink
 13381  -   Env Var: RCLONE_JOTTACLOUD_UNLINK
 13382  -   Type: bool
 13383  -   Default: false
 13384  
 13385  –jottacloud-upload-resume-limit
 13386  
Files bigger than this can be resumed if the upload fails.
 13388  
 13389  -   Config: upload_resume_limit
 13390  -   Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
 13391  -   Type: SizeSuffix
 13392  -   Default: 10M
 13393  
 13394  Limitations
 13395  
 13396  Note that Jottacloud is case insensitive so you can’t have a file called
 13397  “Hello.doc” and one called “hello.doc”.
 13398  
 13399  There are quite a few characters that can’t be in Jottacloud file names.
 13400  Rclone will map these names to and from an identical looking unicode
equivalent. For example if a file has a ? in it, it will be mapped to ？
(a fullwidth question mark) instead.
 13403  
 13404  Jottacloud only supports filenames up to 255 characters in length.
 13405  
 13406  Troubleshooting
 13407  
 13408  Jottacloud exhibits some inconsistent behaviours regarding deleted files
 13409  and folders which may cause Copy, Move and DirMove operations to
 13410  previously deleted paths to fail. Emptying the trash should help in such
 13411  cases.
 13412  
 13413  
 13414  Koofr
 13415  
 13416  Paths are specified as remote:path
 13417  
 13418  Paths may be as deep as required, eg remote:directory/subdirectory.
 13419  
 13420  The initial setup for Koofr involves creating an application password
 13421  for rclone. You can do that by opening the Koofr web application, giving
 13422  the password a nice name like rclone and clicking on generate.
 13423  
 13424  Here is an example of how to make a remote called koofr. First run:
 13425  
 13426       rclone config
 13427  
 13428  This will guide you through an interactive setup process:
 13429  
 13430      No remotes found - make a new one
 13431      n) New remote
 13432      s) Set configuration password
 13433      q) Quit config
 13434      n/s/q> n
 13435      name> koofr 
 13436      Type of storage to configure.
 13437      Enter a string value. Press Enter for the default ("").
 13438      Choose a number from below, or type in your own value
 13439       1 / A stackable unification remote, which can appear to merge the contents of several remotes
 13440         \ "union"
 13441       2 / Alias for an existing remote
 13442         \ "alias"
 13443       3 / Amazon Drive
 13444         \ "amazon cloud drive"
 13445       4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
 13446         \ "s3"
 13447       5 / Backblaze B2
 13448         \ "b2"
 13449       6 / Box
 13450         \ "box"
 13451       7 / Cache a remote
 13452         \ "cache"
 13453       8 / Dropbox
 13454         \ "dropbox"
 13455       9 / Encrypt/Decrypt a remote
 13456         \ "crypt"
 13457      10 / FTP Connection
 13458         \ "ftp"
 13459      11 / Google Cloud Storage (this is not Google Drive)
 13460         \ "google cloud storage"
 13461      12 / Google Drive
 13462         \ "drive"
 13463      13 / Hubic
 13464         \ "hubic"
 13465      14 / JottaCloud
 13466         \ "jottacloud"
 13467      15 / Koofr
 13468         \ "koofr"
 13469      16 / Local Disk
 13470         \ "local"
 13471      17 / Mega
 13472         \ "mega"
 13473      18 / Microsoft Azure Blob Storage
 13474         \ "azureblob"
 13475      19 / Microsoft OneDrive
 13476         \ "onedrive"
 13477      20 / OpenDrive
 13478         \ "opendrive"
 13479      21 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 13480         \ "swift"
 13481      22 / Pcloud
 13482         \ "pcloud"
 13483      23 / QingCloud Object Storage
 13484         \ "qingstor"
 13485      24 / SSH/SFTP Connection
 13486         \ "sftp"
 13487      25 / Webdav
 13488         \ "webdav"
 13489      26 / Yandex Disk
 13490         \ "yandex"
 13491      27 / http Connection
 13492         \ "http"
 13493      Storage> koofr
 13494      ** See help for koofr backend at: https://rclone.org/koofr/ **
 13495  
 13496      Your Koofr user name
 13497      Enter a string value. Press Enter for the default ("").
 13498      user> USER@NAME
 13499      Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
 13500      y) Yes type in my own password
 13501      g) Generate random password
 13502      y/g> y
 13503      Enter the password:
 13504      password:
 13505      Confirm the password:
 13506      password:
 13507      Edit advanced config? (y/n)
 13508      y) Yes
 13509      n) No
 13510      y/n> n
 13511      Remote config
 13512      --------------------
 13513      [koofr]
 13514      type = koofr
 13515      baseurl = https://app.koofr.net
 13516      user = USER@NAME
 13517      password = *** ENCRYPTED ***
 13518      --------------------
 13519      y) Yes this is OK
 13520      e) Edit this remote
 13521      d) Delete this remote
 13522      y/e/d> y
 13523  
 13524  You can choose to edit advanced config in order to enter your own
 13525  service URL if you use an on-premise or white label Koofr instance, or
 13526  choose an alternative mount instead of your primary storage.
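
A config for a self-hosted or white label instance might then look
something like this (the endpoint URL is purely illustrative):

    [koofr]
    type = koofr
    endpoint = https://koofr.example.com
    user = USER@NAME
    password = *** ENCRYPTED ***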
 13527  
 13528  Once configured you can then use rclone like this,
 13529  
 13530  List directories in top level of your Koofr
 13531  
 13532      rclone lsd koofr:
 13533  
 13534  List all the files in your Koofr
 13535  
 13536      rclone ls koofr:
 13537  
To copy a local directory to a Koofr directory called backup

    rclone copy /home/source koofr:backup
 13541  
 13542  Standard Options
 13543  
 13544  Here are the standard options specific to koofr (Koofr).
 13545  
 13546  –koofr-user
 13547  
 13548  Your Koofr user name
 13549  
 13550  -   Config: user
 13551  -   Env Var: RCLONE_KOOFR_USER
 13552  -   Type: string
 13553  -   Default: ""
 13554  
 13555  –koofr-password
 13556  
 13557  Your Koofr password for rclone (generate one at
 13558  https://app.koofr.net/app/admin/preferences/password)
 13559  
 13560  -   Config: password
 13561  -   Env Var: RCLONE_KOOFR_PASSWORD
 13562  -   Type: string
 13563  -   Default: ""
 13564  
 13565  Advanced Options
 13566  
 13567  Here are the advanced options specific to koofr (Koofr).
 13568  
 13569  –koofr-endpoint
 13570  
 13571  The Koofr API endpoint to use
 13572  
 13573  -   Config: endpoint
 13574  -   Env Var: RCLONE_KOOFR_ENDPOINT
 13575  -   Type: string
 13576  -   Default: “https://app.koofr.net”
 13577  
 13578  –koofr-mountid
 13579  
 13580  Mount ID of the mount to use. If omitted, the primary mount is used.
 13581  
 13582  -   Config: mountid
 13583  -   Env Var: RCLONE_KOOFR_MOUNTID
 13584  -   Type: string
 13585  -   Default: ""
 13586  
 13587  Limitations
 13588  
 13589  Note that Koofr is case insensitive so you can’t have a file called
 13590  “Hello.doc” and one called “hello.doc”.
 13591  
 13592  
 13593  Mega
 13594  
 13595  Mega is a cloud storage and file hosting service known for its security
 13596  feature where all files are encrypted locally before they are uploaded.
 13597  This prevents anyone (including employees of Mega) from accessing the
 13598  files without knowledge of the key used for encryption.
 13599  
 13600  This is an rclone backend for Mega which supports the file transfer
 13601  features of Mega using the same client side encryption.
 13602  
 13603  Paths are specified as remote:path
 13604  
 13605  Paths may be as deep as required, eg remote:directory/subdirectory.
 13606  
 13607  Here is an example of how to make a remote called remote. First run:
 13608  
 13609       rclone config
 13610  
 13611  This will guide you through an interactive setup process:
 13612  
 13613      No remotes found - make a new one
 13614      n) New remote
 13615      s) Set configuration password
 13616      q) Quit config
 13617      n/s/q> n
 13618      name> remote
 13619      Type of storage to configure.
 13620      Choose a number from below, or type in your own value
 13621       1 / Alias for an existing remote
 13622         \ "alias"
 13623      [snip]
 13624      14 / Mega
 13625         \ "mega"
 13626      [snip]
 13627      23 / http Connection
 13628         \ "http"
 13629      Storage> mega
 13630      User name
 13631      user> you@example.com
 13632      Password.
 13633      y) Yes type in my own password
 13634      g) Generate random password
 13635      n) No leave this optional password blank
 13636      y/g/n> y
 13637      Enter the password:
 13638      password:
 13639      Confirm the password:
 13640      password:
 13641      Remote config
 13642      --------------------
 13643      [remote]
 13644      type = mega
 13645      user = you@example.com
 13646      pass = *** ENCRYPTED ***
 13647      --------------------
 13648      y) Yes this is OK
 13649      e) Edit this remote
 13650      d) Delete this remote
 13651      y/e/d> y
 13652  
 13653  NOTE: The encryption keys need to have been already generated after a
 13654  regular login via the browser, otherwise attempting to use the
 13655  credentials in rclone will fail.
 13656  
 13657  Once configured you can then use rclone like this,
 13658  
 13659  List directories in top level of your Mega
 13660  
 13661      rclone lsd remote:
 13662  
 13663  List all the files in your Mega
 13664  
 13665      rclone ls remote:
 13666  
To copy a local directory to a Mega directory called backup
 13668  
 13669      rclone copy /home/source remote:backup
 13670  
 13671  Modified time and hashes
 13672  
 13673  Mega does not support modification times or hashes yet.
 13674  
 13675  Duplicated files
 13676  
 13677  Mega can have two files with exactly the same name and path (unlike a
 13678  normal file system).
 13679  
 13680  Duplicated files cause problems with the syncing and you will see
 13681  messages in the log about duplicates.
 13682  
 13683  Use rclone dedupe to fix duplicated files.
 13684  
 13685  Failure to log-in
 13686  
 13687  Mega remotes seem to get blocked (reject logins) under “heavy use”. We
 13688  haven’t worked out the exact blocking rules but it seems to be related
to fast paced, successive rclone commands.
 13690  
For example, executing the command rclone link remote:file 90 times in
a row will cause the remote to become “blocked”. This is not an abnormal
situation, for example if you wish to get the public links of a
directory with hundreds of files… After more or less a week, the remote
will accept rclone logins normally again.
 13696  
You can mitigate this issue by mounting the remote with rclone mount.
This will log in once when mounting and log out only when unmounting. You
 13699  can also run rclone rcd and then use rclone rc to run the commands over
 13700  the API to avoid logging in each time.
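
A rough sketch of keeping a single session open via the remote control
API (the flags, directory and use of --rc-no-auth are illustrative and
assume a trusted local machine):

    rclone rcd --rc-no-auth &
    rclone rc operations/list fs=remote: remote=some/dir
    rclone rc core/quit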
 13701  
 13702  Rclone does not currently close mega sessions (you can see them in the
 13703  web interface), however closing the sessions does not solve the issue.
 13704  
 13705  If you space rclone commands by 3 seconds it will avoid blocking the
 13706  remote. We haven’t identified the exact blocking rules, so perhaps one
 13707  could execute the command 80 times without waiting and avoid blocking by
 13708  waiting 3 seconds, then continuing…
 13709  
 13710  Note that this has been observed by trial and error and might not be set
 13711  in stone.
 13712  
 13713  Other tools seem not to produce this blocking effect, as they use a
 13714  different working approach (state-based, using sessionIDs instead of
 13715  log-in) which isn’t compatible with the current stateless rclone
 13716  approach.
 13717  
 13718  Note that once blocked, the use of other tools (such as megacmd) is not
a sure workaround: the following megacmd login times have been observed
in succession for a blocked remote: 7 minutes, 20 min, 30 min, 30 min, 30 min.
 13721  Web access looks unaffected though.
 13722  
 13723  Investigation is continuing in relation to workarounds based on
timeouts, pacers, retries and tpslimits - if you discover something
 13725  relevant, please post on the forum.
 13726  
 13727  So, if rclone was working nicely and suddenly you are unable to log-in
 13728  and you are sure the user and the password are correct, likely you have
 13729  got the remote blocked for a while.
 13730  
 13731  Standard Options
 13732  
 13733  Here are the standard options specific to mega (Mega).
 13734  
 13735  –mega-user
 13736  
 13737  User name
 13738  
 13739  -   Config: user
 13740  -   Env Var: RCLONE_MEGA_USER
 13741  -   Type: string
 13742  -   Default: ""
 13743  
 13744  –mega-pass
 13745  
 13746  Password.
 13747  
 13748  -   Config: pass
 13749  -   Env Var: RCLONE_MEGA_PASS
 13750  -   Type: string
 13751  -   Default: ""
 13752  
 13753  Advanced Options
 13754  
 13755  Here are the advanced options specific to mega (Mega).
 13756  
 13757  –mega-debug
 13758  
 13759  Output more debug from Mega.
 13760  
 13761  If this flag is set (along with -vv) it will print further debugging
 13762  information from the mega backend.
 13763  
 13764  -   Config: debug
 13765  -   Env Var: RCLONE_MEGA_DEBUG
 13766  -   Type: bool
 13767  -   Default: false
 13768  
 13769  –mega-hard-delete
 13770  
 13771  Delete files permanently rather than putting them into the trash.
 13772  
 13773  Normally the mega backend will put all deletions into the trash rather
 13774  than permanently deleting them. If you specify this then rclone will
 13775  permanently delete objects instead.
 13776  
 13777  -   Config: hard_delete
 13778  -   Env Var: RCLONE_MEGA_HARD_DELETE
 13779  -   Type: bool
 13780  -   Default: false
 13781  
 13782  Limitations
 13783  
 13784  This backend uses the go-mega go library which is an opensource go
 13785  library implementing the Mega API. There doesn’t appear to be any
 13786  documentation for the mega protocol beyond the mega C++ SDK source code
 13787  so there are likely quite a few errors still remaining in this library.
 13788  
 13789  Mega allows duplicate files which may confuse rclone.
 13790  
 13791  
 13792  Microsoft Azure Blob Storage
 13793  
 13794  Paths are specified as remote:container (or remote: for the lsd
 13795  command.) You may put subdirectories in too, eg
 13796  remote:container/path/to/dir.
 13797  
 13798  Here is an example of making a Microsoft Azure Blob Storage
 13799  configuration. For a remote called remote. First run:
 13800  
 13801       rclone config
 13802  
 13803  This will guide you through an interactive setup process:
 13804  
 13805      No remotes found - make a new one
 13806      n) New remote
 13807      s) Set configuration password
 13808      q) Quit config
 13809      n/s/q> n
 13810      name> remote
 13811      Type of storage to configure.
 13812      Choose a number from below, or type in your own value
 13813       1 / Amazon Drive
 13814         \ "amazon cloud drive"
 13815       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 13816         \ "s3"
 13817       3 / Backblaze B2
 13818         \ "b2"
 13819       4 / Box
 13820         \ "box"
 13821       5 / Dropbox
 13822         \ "dropbox"
 13823       6 / Encrypt/Decrypt a remote
 13824         \ "crypt"
 13825       7 / FTP Connection
 13826         \ "ftp"
 13827       8 / Google Cloud Storage (this is not Google Drive)
 13828         \ "google cloud storage"
 13829       9 / Google Drive
 13830         \ "drive"
 13831      10 / Hubic
 13832         \ "hubic"
 13833      11 / Local Disk
 13834         \ "local"
 13835      12 / Microsoft Azure Blob Storage
 13836         \ "azureblob"
 13837      13 / Microsoft OneDrive
 13838         \ "onedrive"
 13839      14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 13840         \ "swift"
 13841      15 / SSH/SFTP Connection
 13842         \ "sftp"
 13843      16 / Yandex Disk
 13844         \ "yandex"
 13845      17 / http Connection
 13846         \ "http"
 13847      Storage> azureblob
 13848      Storage Account Name
 13849      account> account_name
 13850      Storage Account Key
 13851      key> base64encodedkey==
 13852      Endpoint for the service - leave blank normally.
 13853      endpoint> 
 13854      Remote config
 13855      --------------------
 13856      [remote]
 13857      account = account_name
 13858      key = base64encodedkey==
 13859      endpoint = 
 13860      --------------------
 13861      y) Yes this is OK
 13862      e) Edit this remote
 13863      d) Delete this remote
 13864      y/e/d> y
 13865  
 13866  See all containers
 13867  
 13868      rclone lsd remote:
 13869  
 13870  Make a new container
 13871  
 13872      rclone mkdir remote:container
 13873  
 13874  List the contents of a container
 13875  
 13876      rclone ls remote:container
 13877  
 13878  Sync /home/local/directory to the remote container, deleting any excess
 13879  files in the container.
 13880  
 13881      rclone sync /home/local/directory remote:container
 13882  
 13883  –fast-list
 13884  
 13885  This remote supports --fast-list which allows you to use fewer
 13886  transactions in exchange for more memory. See the rclone docs for more
 13887  details.
 13888  
 13889  Modified time
 13890  
 13891  The modified time is stored as metadata on the object with the mtime
 13892  key. It is stored using RFC3339 Format time with nanosecond precision.
 13893  The metadata is supplied during directory listings so there is no
 13894  overhead to using it.
 13895  
 13896  Hashes
 13897  
 13898  MD5 hashes are stored with blobs. However blobs that were uploaded in
 13899  chunks only have an MD5 if the source remote was capable of MD5 hashes,
 13900  eg the local disk.
 13901  
 13902  Authenticating with Azure Blob Storage
 13903  
 13904  Rclone has 3 ways of authenticating with Azure Blob Storage:
 13905  
 13906  Account and Key
 13907  
This is the most straightforward and least flexible way. Just fill in
 13909  the account and key lines and leave the rest blank.
 13910  
 13911  SAS URL
 13912  
 13913  This can be an account level SAS URL or container level SAS URL
 13914  
 13915  To use it leave account, key blank and fill in sas_url.
 13916  
 13917  Account level SAS URL or container level SAS URL can be obtained from
 13918  Azure portal or Azure Storage Explorer. To get a container level SAS URL
 13919  right click on a container in the Azure Blob explorer in the Azure
 13920  portal.
 13921  
If you use a container level SAS URL, rclone operations are permitted
only on that particular container, eg
 13924  
 13925      rclone ls azureblob:container or rclone ls azureblob:
 13926  
Since the container name is already part of the SAS URL, you can leave
it empty as well.
 13929  
 13930  However these will not work
 13931  
 13932      rclone lsd azureblob:
 13933      rclone ls azureblob:othercontainer
 13934  
 13935  This would be useful for temporarily allowing third parties access to a
 13936  single container or putting credentials into an untrusted environment.
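
For illustration, a container level SAS configuration might look like
this (the URL is a placeholder):

    [remote]
    type = azureblob
    sas_url = https://ACCOUNT.blob.core.windows.net/CONTAINER?sv=...&sig=...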
 13937  
 13938  Multipart uploads
 13939  
 13940  Rclone supports multipart uploads with Azure Blob storage. Files bigger
 13941  than 256MB will be uploaded using chunked upload by default.
 13942  
 13943  The files will be uploaded in parallel in 4MB chunks (by default). Note
 13944  that these chunks are buffered in memory and there may be up to
 13945  --transfers of them being uploaded at once.
 13946  
Files can’t be split into more than 50,000 chunks, so by default the
largest file that can be uploaded with 4MB chunk size is 195GB. Above
 13949  this rclone will double the chunk size until it creates less than 50,000
 13950  chunks. By default this will mean a maximum file size of 3.2TB can be
 13951  uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M.
 13952  
 13953  Note that rclone doesn’t commit the block list until the end of the
 13954  upload which means that there is a limit of 9.5TB of multipart uploads
 13955  in progress as Azure won’t allow more than that amount of uncommitted
 13956  blocks.
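
For example, to raise the chunk size for a very large single file as
described above (the local path is made up):

    rclone copy --azureblob-chunk-size 100M /path/to/bigfile remote:container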
 13957  
 13958  Standard Options
 13959  
 13960  Here are the standard options specific to azureblob (Microsoft Azure
 13961  Blob Storage).
 13962  
 13963  –azureblob-account
 13964  
 13965  Storage Account Name (leave blank to use connection string or SAS URL)
 13966  
 13967  -   Config: account
 13968  -   Env Var: RCLONE_AZUREBLOB_ACCOUNT
 13969  -   Type: string
 13970  -   Default: ""
 13971  
 13972  –azureblob-key
 13973  
 13974  Storage Account Key (leave blank to use connection string or SAS URL)
 13975  
 13976  -   Config: key
 13977  -   Env Var: RCLONE_AZUREBLOB_KEY
 13978  -   Type: string
 13979  -   Default: ""
 13980  
 13981  –azureblob-sas-url
 13982  
 13983  SAS URL for container level access only (leave blank if using
 13984  account/key or connection string)
 13985  
 13986  -   Config: sas_url
 13987  -   Env Var: RCLONE_AZUREBLOB_SAS_URL
 13988  -   Type: string
 13989  -   Default: ""
 13990  
 13991  Advanced Options
 13992  
 13993  Here are the advanced options specific to azureblob (Microsoft Azure
 13994  Blob Storage).
 13995  
 13996  –azureblob-endpoint
 13997  
 13998  Endpoint for the service Leave blank normally.
 13999  
 14000  -   Config: endpoint
 14001  -   Env Var: RCLONE_AZUREBLOB_ENDPOINT
 14002  -   Type: string
 14003  -   Default: ""
 14004  
 14005  –azureblob-upload-cutoff
 14006  
 14007  Cutoff for switching to chunked upload (<= 256MB).
 14008  
 14009  -   Config: upload_cutoff
 14010  -   Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
 14011  -   Type: SizeSuffix
 14012  -   Default: 256M
 14013  
 14014  –azureblob-chunk-size
 14015  
 14016  Upload chunk size (<= 100MB).
 14017  
Note that this is stored in memory and there may be up to “--transfers”
chunks stored at once in memory.
 14020  
 14021  -   Config: chunk_size
 14022  -   Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
 14023  -   Type: SizeSuffix
 14024  -   Default: 4M
 14025  
 14026  –azureblob-list-chunk
 14027  
 14028  Size of blob list.
 14029  
 14030  This sets the number of blobs requested in each listing chunk. Default
 14031  is the maximum, 5000. “List blobs” requests are permitted 2 minutes per
 14032  megabyte to complete. If an operation is taking longer than 2 minutes
per megabyte on average, it will time out. This can be used to limit
the number of blob items returned, to avoid the time out.
 14035  
 14036  -   Config: list_chunk
 14037  -   Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
 14038  -   Type: int
 14039  -   Default: 5000
 14040  
 14041  –azureblob-access-tier
 14042  
 14043  Access tier of blob: hot, cool or archive.
 14044  
 14045  Archived blobs can be restored by setting access tier to hot or cool.
 14046  Leave blank if you intend to use default access tier, which is set at
 14047  account level
 14048  
If there is no “access tier” specified, rclone doesn’t apply any tier.
rclone performs a “Set Tier” operation on blobs while uploading; if the
objects are not modified, specifying a new “access tier” will have no
effect. If blobs are in the “archive tier” at the remote, data transfer
operations from the remote will not be allowed. The user should first
restore the blobs by tiering them to “Hot” or “Cool”.
 14055  
 14056  -   Config: access_tier
 14057  -   Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
 14058  -   Type: string
 14059  -   Default: ""
 14060  
 14061  Limitations
 14062  
 14063  MD5 sums are only uploaded with chunked files if the source has an MD5
 14064  sum. This will always be the case for a local to azure copy.
 14065  
 14066  
 14067  Microsoft OneDrive
 14068  
 14069  Paths are specified as remote:path
 14070  
 14071  Paths may be as deep as required, eg remote:directory/subdirectory.
 14072  
 14073  The initial setup for OneDrive involves getting a token from Microsoft
 14074  which you need to do in your browser. rclone config walks you through
 14075  it.
 14076  
 14077  Here is an example of how to make a remote called remote. First run:
 14078  
 14079       rclone config
 14080  
 14081  This will guide you through an interactive setup process:
 14082  
 14083      e) Edit existing remote
 14084      n) New remote
 14085      d) Delete remote
 14086      r) Rename remote
 14087      c) Copy remote
 14088      s) Set configuration password
 14089      q) Quit config
 14090      e/n/d/r/c/s/q> n
 14091      name> remote
 14092      Type of storage to configure.
 14093      Enter a string value. Press Enter for the default ("").
 14094      Choose a number from below, or type in your own value
 14095      ...
 14096      18 / Microsoft OneDrive
 14097         \ "onedrive"
 14098      ...
 14099      Storage> 18
 14100      Microsoft App Client Id
 14101      Leave blank normally.
 14102      Enter a string value. Press Enter for the default ("").
 14103      client_id>
 14104      Microsoft App Client Secret
 14105      Leave blank normally.
 14106      Enter a string value. Press Enter for the default ("").
 14107      client_secret>
 14108      Edit advanced config? (y/n)
 14109      y) Yes
 14110      n) No
 14111      y/n> n
 14112      Remote config
 14113      Use auto config?
 14114       * Say Y if not sure
 14115       * Say N if you are working on a remote or headless machine
 14116      y) Yes
 14117      n) No
 14118      y/n> y
 14119      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
 14120      Log in and authorize rclone for access
 14121      Waiting for code...
 14122      Got code
 14123      Choose a number from below, or type in an existing value
 14124       1 / OneDrive Personal or Business
 14125         \ "onedrive"
 14126       2 / Sharepoint site
 14127         \ "sharepoint"
 14128       3 / Type in driveID
 14129         \ "driveid"
 14130       4 / Type in SiteID
 14131         \ "siteid"
 14132       5 / Search a Sharepoint site
 14133         \ "search"
 14134      Your choice> 1
 14135      Found 1 drives, please select the one you want to use:
 14136      0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
 14137      Chose drive to use:> 0
 14138      Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
 14139      Is that okay?
 14140      y) Yes
 14141      n) No
 14142      y/n> y
 14143      --------------------
 14144      [remote]
 14145      type = onedrive
 14146      token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
 14147      drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
 14148      drive_type = business
 14149      --------------------
 14150      y) Yes this is OK
 14151      e) Edit this remote
 14152      d) Delete this remote
 14153      y/e/d> y
 14154  
 14155  See the remote setup docs for how to set it up on a machine with no
 14156  Internet browser available.
 14157  
 14158  Note that rclone runs a webserver on your local machine to collect the
 14159  token as returned from Microsoft. This only runs from the moment it
 14160  opens your browser to the moment you get back the verification code.
This is on http://127.0.0.1:53682/ and it may require you to
 14162  unblock it temporarily if you are running a host firewall.
 14163  
 14164  Once configured you can then use rclone like this,
 14165  
 14166  List directories in top level of your OneDrive
 14167  
 14168      rclone lsd remote:
 14169  
 14170  List all the files in your OneDrive
 14171  
 14172      rclone ls remote:
 14173  
To copy a local directory to a OneDrive directory called backup
 14175  
 14176      rclone copy /home/source remote:backup
 14177  
 14178  Getting your own Client ID and Key
 14179  
 14180  rclone uses a pair of Client ID and Key shared by all rclone users when
 14181  performing requests by default. If you are having problems with them
 14182  (E.g., seeing a lot of throttling), you can get your own Client ID and
 14183  Key by following the steps below:
 14184  
 14185  1.  Open https://apps.dev.microsoft.com/#/appList, then click Add an app
 14186      (Choose Converged applications if applicable)
 14187  2.  Enter a name for your app, and click continue. Copy and keep the
 14188      Application Id under the app name for later use.
 14189  3.  Under section Application Secrets, click Generate New Password. Copy
 14190      and keep that password for later use.
 14191  4.  Under section Platforms, click Add platform, then Web. Enter
 14192      http://localhost:53682/ in Redirect URLs.
 14193  5.  Under section Microsoft Graph Permissions, Add these
 14194      delegated permissions: Files.Read, Files.ReadWrite, Files.Read.All,
 14195      Files.ReadWrite.All, offline_access, User.Read.
 14196  6.  Scroll to the bottom and click Save.
 14197  
 14198  Now the application is complete. Run rclone config to create or edit a
 14199  OneDrive remote. Supply the app ID and password as Client ID and Secret,
 14200  respectively. rclone will walk you through the remaining steps.
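
The resulting remote config might end up looking something like this
(all values are placeholders):

    [remote]
    type = onedrive
    client_id = YOUR_APPLICATION_ID
    client_secret = YOUR_APPLICATION_PASSWORD
    token = {"access_token":"XXXXXX"}
    drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
    drive_type = business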
 14201  
 14202  Modified time and hashes
 14203  
 14204  OneDrive allows modification times to be set on objects accurate to 1
 14205  second. These will be used to detect whether objects need syncing or
 14206  not.
 14207  
 14208  OneDrive personal supports SHA1 type hashes. OneDrive for business and
 14209  Sharepoint Server support QuickXorHash.
 14210  
 14211  For all types of OneDrive you can use the --checksum flag.
 14212  
 14213  Deleting files
 14214  
 14215  Any files you delete with rclone will end up in the trash. Microsoft
 14216  doesn’t provide an API to permanently delete files, nor to empty the
 14217  trash, so you will have to do that with one of Microsoft’s apps or via
 14218  the OneDrive website.
 14219  
 14220  Standard Options
 14221  
 14222  Here are the standard options specific to onedrive (Microsoft OneDrive).
 14223  
 14224  –onedrive-client-id
 14225  
 14226  Microsoft App Client Id Leave blank normally.
 14227  
 14228  -   Config: client_id
 14229  -   Env Var: RCLONE_ONEDRIVE_CLIENT_ID
 14230  -   Type: string
 14231  -   Default: ""
 14232  
 14233  –onedrive-client-secret
 14234  
 14235  Microsoft App Client Secret Leave blank normally.
 14236  
 14237  -   Config: client_secret
 14238  -   Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
 14239  -   Type: string
 14240  -   Default: ""
 14241  
 14242  Advanced Options
 14243  
 14244  Here are the advanced options specific to onedrive (Microsoft OneDrive).
 14245  
 14246  –onedrive-chunk-size
 14247  
 14248  Chunk size to upload files with - must be multiple of 320k.
 14249  
 14250  Above this size files will be chunked - must be multiple of 320k. Note
 14251  that the chunks will be buffered into memory.
 14252  
 14253  -   Config: chunk_size
 14254  -   Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
 14255  -   Type: SizeSuffix
 14256  -   Default: 10M
 14257  
 14258  –onedrive-drive-id
 14259  
 14260  The ID of the drive to use
 14261  
 14262  -   Config: drive_id
 14263  -   Env Var: RCLONE_ONEDRIVE_DRIVE_ID
 14264  -   Type: string
 14265  -   Default: ""
 14266  
 14267  –onedrive-drive-type
 14268  
 14269  The type of the drive ( personal | business | documentLibrary )
 14270  
 14271  -   Config: drive_type
 14272  -   Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
 14273  -   Type: string
 14274  -   Default: ""
 14275  
 14276  –onedrive-expose-onenote-files
 14277  
 14278  Set to make OneNote files show up in directory listings.
 14279  
 14280  By default rclone will hide OneNote files in directory listings because
 14281  operations like “Open” and “Update” won’t work on them. But this
 14282  behaviour may also prevent you from deleting them. If you want to delete
 14283  OneNote files or otherwise want them to show up in directory listing,
 14284  set this option.
 14285  
 14286  -   Config: expose_onenote_files
 14287  -   Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
 14288  -   Type: bool
 14289  -   Default: false
 14290  
 14291  Limitations
 14292  
 14293  Note that OneDrive is case insensitive so you can’t have a file called
 14294  “Hello.doc” and one called “hello.doc”.
 14295  
 14296  There are quite a few characters that can’t be in OneDrive file names.
 14297  These can’t occur on Windows platforms, but on non-Windows platforms
 14298  they are common. Rclone will map these names to and from an identical
looking unicode equivalent. For example if a file has a ? in it, it will
be mapped to ？ (a fullwidth question mark) instead.
 14301  
 14302  The largest allowed file sizes are 15GB for OneDrive for Business and
 14303  35GB for OneDrive Personal (Updated 4 Jan 2019).
 14304  
 14305  The entire path, including the file name, must contain fewer than 400
 14306  characters for OneDrive, OneDrive for Business and SharePoint Online. If
 14307  you are encrypting file and folder names with rclone, you may want to
 14308  pay attention to this limitation because the encrypted names are
 14309  typically longer than the original ones.
 14310  
 14311  OneDrive seems to be OK with at least 50,000 files in a folder, but at
 14312  100,000 rclone will get errors listing the directory like
 14313  couldn’t list files: UnknownError:. See #2707 for more info.
 14314  
 14315  An official document about the limitations for different types of
 14316  OneDrive can be found here.
 14317  
 14318  Versioning issue
 14319  
 14320  Every change in OneDrive causes the service to create a new version.
This counts against a user’s quota. For example changing the modification
 14322  time of a file creates a second version, so the file is using twice the
 14323  space.
 14324  
The copy command is the only rclone command affected by this, as we copy
the file and then afterwards set the modification time to match the
source file.
 14327  
NOTE: Starting October 2018, users will no longer be able to disable
versioning by default. This is because Microsoft has updated the
underlying mechanism. To change this new default setting, a PowerShell
command must be run by a SharePoint admin. If you are an admin, you can
run these commands in PowerShell to change that setting:
 14333  
 14334  1.  Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case
 14335      you haven’t installed this already)
 14336  2.  Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
 14337  3.  Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM
 14338      (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this
 14339      will prompt for your credentials)
 14340  4.  Set-SPOTenant -EnableMinimumVersionRequirement $False
 14341  5.  Disconnect-SPOService (to disconnect from the server)
 14342  
 14343  _Below are the steps for normal users to disable versioning. If you
 14344  don’t see the “No Versioning” option, make sure the above requirements
 14345  are met._
 14346  
User Weropol has found a method to disable versioning on OneDrive:
 14348  
 14349  1.  Open the settings menu by clicking on the gear symbol at the top of
 14350      the OneDrive Business page.
 14351  2.  Click Site settings.
 14352  3.  Once on the Site settings page, navigate to Site Administration >
 14353      Site libraries and lists.
 14354  4.  Click Customize “Documents”.
 14355  5.  Click General Settings > Versioning Settings.
 14356  6.  Under Document Version History select the option No versioning.
 14357      Note: This will disable the creation of new file versions, but will
 14358      not remove any previous versions. Your documents are safe.
 14359  7.  Apply the changes by clicking OK.
8.  Use rclone to upload or modify files. (I also use the
    --no-update-modtime flag)
 14362  9.  Restore the versioning settings after using rclone. (Optional)
 14363  
 14364  Troubleshooting
 14365  
 14366      Error: access_denied
 14367      Code: AADSTS65005
 14368      Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
 14369  
 14370  This means that rclone can’t use the OneDrive for Business API with your
account. You can’t do much about it yourself; consider writing an email
to your admins.
 14373  
 14374  However, there are other ways to interact with your OneDrive account.
 14375  Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint
 14376  
 14377      Error: invalid_grant
 14378      Code: AADSTS50076
 14379      Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
 14380  
 14381  If you see the error above after enabling multi-factor authentication
 14382  for your account, you can fix it by refreshing your OAuth refresh token.
 14383  To do that, run rclone config, and choose to edit your OneDrive backend.
 14384  Then, you don’t need to actually make any changes until you reach this
 14385  question: Already have a token - refresh?. For this question, answer y
 14386  and go through the process to refresh your token, just like the first
 14387  time the backend is configured. After this, rclone should work again for
 14388  this backend.
 14389  
 14390  
 14391  OpenDrive
 14392  
 14393  Paths are specified as remote:path
 14394  
 14395  Paths may be as deep as required, eg remote:directory/subdirectory.
 14396  
 14397  Here is an example of how to make a remote called remote. First run:
 14398  
 14399       rclone config
 14400  
 14401  This will guide you through an interactive setup process:
 14402  
 14403      n) New remote
 14404      d) Delete remote
 14405      q) Quit config
 14406      e/n/d/q> n
 14407      name> remote
 14408      Type of storage to configure.
 14409      Choose a number from below, or type in your own value
 14410       1 / Amazon Drive
 14411         \ "amazon cloud drive"
 14412       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 14413         \ "s3"
 14414       3 / Backblaze B2
 14415         \ "b2"
 14416       4 / Dropbox
 14417         \ "dropbox"
 14418       5 / Encrypt/Decrypt a remote
 14419         \ "crypt"
 14420       6 / Google Cloud Storage (this is not Google Drive)
 14421         \ "google cloud storage"
 14422       7 / Google Drive
 14423         \ "drive"
 14424       8 / Hubic
 14425         \ "hubic"
 14426       9 / Local Disk
 14427         \ "local"
 14428      10 / OpenDrive
 14429         \ "opendrive"
 14430      11 / Microsoft OneDrive
 14431         \ "onedrive"
 14432      12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 14433         \ "swift"
 14434      13 / SSH/SFTP Connection
 14435         \ "sftp"
 14436      14 / Yandex Disk
 14437         \ "yandex"
 14438      Storage> 10
 14439      Username
 14440      username>
 14441      Password
 14442      y) Yes type in my own password
 14443      g) Generate random password
 14444      y/g> y
 14445      Enter the password:
 14446      password:
 14447      Confirm the password:
 14448      password:
 14449      --------------------
 14450      [remote]
 14451      username =
 14452      password = *** ENCRYPTED ***
 14453      --------------------
 14454      y) Yes this is OK
 14455      e) Edit this remote
 14456      d) Delete this remote
 14457      y/e/d> y
 14458  
 14459  List directories in top level of your OpenDrive
 14460  
 14461      rclone lsd remote:
 14462  
 14463  List all the files in your OpenDrive
 14464  
 14465      rclone ls remote:
 14466  
 14467  To copy a local directory to an OpenDrive directory called backup
 14468  
 14469      rclone copy /home/source remote:backup
 14470  
 14471  Modified time and MD5SUMs
 14472  
 14473  OpenDrive allows modification times to be set on objects accurate to 1
 14474  second. These will be used to detect whether objects need syncing or
 14475  not.
 14476  
 14477  Standard Options
 14478  
 14479  Here are the standard options specific to opendrive (OpenDrive).
 14480  
 14481  –opendrive-username
 14482  
 14483  Username
 14484  
 14485  -   Config: username
 14486  -   Env Var: RCLONE_OPENDRIVE_USERNAME
 14487  -   Type: string
 14488  -   Default: ""
 14489  
 14490  –opendrive-password
 14491  
 14492  Password.
 14493  
 14494  -   Config: password
 14495  -   Env Var: RCLONE_OPENDRIVE_PASSWORD
 14496  -   Type: string
 14497  -   Default: ""
 14498  
 14499  Limitations
 14500  
 14501  Note that OpenDrive is case insensitive so you can’t have a file called
 14502  “Hello.doc” and one called “hello.doc”.
 14503  
 14504  There are quite a few characters that can’t be in OpenDrive file names.
 14505  These can’t occur on Windows platforms, but on non-Windows platforms
 14506  they are common. Rclone will map these names to and from an identical
looking unicode equivalent. For example, if a file has a ? in it, it
will be mapped to ？ (a fullwidth question mark) instead.
 14509  
 14510  
 14511  QingStor
 14512  
 14513  Paths are specified as remote:bucket (or remote: for the lsd command.)
 14514  You may put subdirectories in too, eg remote:bucket/path/to/dir.
 14515  
Here is an example of making a QingStor configuration. First run
 14517  
 14518      rclone config
 14519  
 14520  This will guide you through an interactive setup process.
 14521  
 14522      No remotes found - make a new one
 14523      n) New remote
 14524      r) Rename remote
 14525      c) Copy remote
 14526      s) Set configuration password
 14527      q) Quit config
 14528      n/r/c/s/q> n
 14529      name> remote
 14530      Type of storage to configure.
 14531      Choose a number from below, or type in your own value
 14532       1 / Amazon Drive
 14533         \ "amazon cloud drive"
 14534       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 14535         \ "s3"
 14536       3 / Backblaze B2
 14537         \ "b2"
 14538       4 / Dropbox
 14539         \ "dropbox"
 14540       5 / Encrypt/Decrypt a remote
 14541         \ "crypt"
 14542       6 / FTP Connection
 14543         \ "ftp"
 14544       7 / Google Cloud Storage (this is not Google Drive)
 14545         \ "google cloud storage"
 14546       8 / Google Drive
 14547         \ "drive"
 14548       9 / Hubic
 14549         \ "hubic"
 14550      10 / Local Disk
 14551         \ "local"
 14552      11 / Microsoft OneDrive
 14553         \ "onedrive"
 14554      12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 14555         \ "swift"
 14556      13 / QingStor Object Storage
 14557         \ "qingstor"
 14558      14 / SSH/SFTP Connection
 14559         \ "sftp"
 14560      15 / Yandex Disk
 14561         \ "yandex"
 14562      Storage> 13
 14563      Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
 14564      Choose a number from below, or type in your own value
 14565       1 / Enter QingStor credentials in the next step
 14566         \ "false"
 14567       2 / Get QingStor credentials from the environment (env vars or IAM)
 14568         \ "true"
 14569      env_auth> 1
 14570      QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
 14571      access_key_id> access_key
 14572      QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
 14573      secret_access_key> secret_key
 14574      Enter a endpoint URL to connection QingStor API.
 14575      Leave blank will use the default value "https://qingstor.com:443"
 14576      endpoint>
 14577      Zone connect to. Default is "pek3a".
 14578      Choose a number from below, or type in your own value
 14579         / The Beijing (China) Three Zone
 14580       1 | Needs location constraint pek3a.
 14581         \ "pek3a"
 14582         / The Shanghai (China) First Zone
 14583       2 | Needs location constraint sh1a.
 14584         \ "sh1a"
 14585      zone> 1
 14586      Number of connnection retry.
 14587      Leave blank will use the default value "3".
 14588      connection_retries>
 14589      Remote config
 14590      --------------------
 14591      [remote]
 14592      env_auth = false
 14593      access_key_id = access_key
 14594      secret_access_key = secret_key
 14595      endpoint =
 14596      zone = pek3a
 14597      connection_retries =
 14598      --------------------
 14599      y) Yes this is OK
 14600      e) Edit this remote
 14601      d) Delete this remote
 14602      y/e/d> y
 14603  
 14604  This remote is called remote and can now be used like this
 14605  
 14606  See all buckets
 14607  
 14608      rclone lsd remote:
 14609  
 14610  Make a new bucket
 14611  
 14612      rclone mkdir remote:bucket
 14613  
 14614  List the contents of a bucket
 14615  
 14616      rclone ls remote:bucket
 14617  
 14618  Sync /home/local/directory to the remote bucket, deleting any excess
 14619  files in the bucket.
 14620  
 14621      rclone sync /home/local/directory remote:bucket
 14622  
 14623  –fast-list
 14624  
 14625  This remote supports --fast-list which allows you to use fewer
 14626  transactions in exchange for more memory. See the rclone docs for more
 14627  details.
 14628  
 14629  Multipart uploads
 14630  
 14631  rclone supports multipart uploads with QingStor which means that it can
 14632  upload files bigger than 5GB. Note that files uploaded with multipart
 14633  upload don’t have an MD5SUM.
 14634  
 14635  Buckets and Zone
 14636  
 14637  With QingStor you can list buckets (rclone lsd) using any zone, but you
 14638  can only access the content of a bucket from the zone it was created in.
 14639  If you attempt to access a bucket from the wrong zone, you will get an
 14640  error, incorrect zone, the bucket is not in 'XXX' zone.
 14641  
 14642  Authentication
 14643  
 14644  There are two ways to supply rclone with a set of QingStor credentials.
 14645  In order of precedence:
 14646  
 14647  -   Directly in the rclone configuration file (as configured by
 14648      rclone config)
 14649      -   set access_key_id and secret_access_key
-   Runtime configuration (see the example after this list):
 14651      -   set env_auth to true in the config file
 14652      -   Exporting the following environment variables before running
 14653          rclone
 14654          -   Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
 14655          -   Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
 14656  
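For example, with env_auth = true in the config file, the runtime
method might look like this (the key values shown are placeholders):

    export QS_ACCESS_KEY_ID=AKIDEXAMPLE
    export QS_SECRET_ACCESS_KEY=SECRETEXAMPLE
    rclone lsd remote:
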
 14657  Standard Options
 14658  
 14659  Here are the standard options specific to qingstor (QingCloud Object
 14660  Storage).
 14661  
 14662  –qingstor-env-auth
 14663  
Get QingStor credentials from runtime. Only applies if access_key_id and
secret_access_key are blank.
 14666  
 14667  -   Config: env_auth
 14668  -   Env Var: RCLONE_QINGSTOR_ENV_AUTH
 14669  -   Type: bool
 14670  -   Default: false
 14671  -   Examples:
 14672      -   “false”
 14673          -   Enter QingStor credentials in the next step
 14674      -   “true”
 14675          -   Get QingStor credentials from the environment (env vars or
 14676              IAM)
 14677  
 14678  –qingstor-access-key-id
 14679  
QingStor Access Key ID. Leave blank for anonymous access or runtime
credentials.
 14682  
 14683  -   Config: access_key_id
 14684  -   Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
 14685  -   Type: string
 14686  -   Default: ""
 14687  
 14688  –qingstor-secret-access-key
 14689  
QingStor Secret Access Key (password). Leave blank for anonymous access
or runtime credentials.
 14692  
 14693  -   Config: secret_access_key
 14694  -   Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
 14695  -   Type: string
 14696  -   Default: ""
 14697  
 14698  –qingstor-endpoint
 14699  
Enter an endpoint URL to connect to the QingStor API. Leave blank to use
the default value “https://qingstor.com:443”.
 14702  
 14703  -   Config: endpoint
 14704  -   Env Var: RCLONE_QINGSTOR_ENDPOINT
 14705  -   Type: string
 14706  -   Default: ""
 14707  
 14708  –qingstor-zone
 14709  
 14710  Zone to connect to. Default is “pek3a”.
 14711  
 14712  -   Config: zone
 14713  -   Env Var: RCLONE_QINGSTOR_ZONE
 14714  -   Type: string
 14715  -   Default: ""
 14716  -   Examples:
 14717      -   “pek3a”
 14718          -   The Beijing (China) Three Zone
 14719          -   Needs location constraint pek3a.
 14720      -   “sh1a”
 14721          -   The Shanghai (China) First Zone
 14722          -   Needs location constraint sh1a.
 14723      -   “gd2a”
 14724          -   The Guangdong (China) Second Zone
 14725          -   Needs location constraint gd2a.
 14726  
 14727  Advanced Options
 14728  
 14729  Here are the advanced options specific to qingstor (QingCloud Object
 14730  Storage).
 14731  
 14732  –qingstor-connection-retries
 14733  
 14734  Number of connection retries.
 14735  
 14736  -   Config: connection_retries
 14737  -   Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
 14738  -   Type: int
 14739  -   Default: 3
 14740  
 14741  –qingstor-upload-cutoff
 14742  
 14743  Cutoff for switching to chunked upload
 14744  
 14745  Any files larger than this will be uploaded in chunks of chunk_size. The
 14746  minimum is 0 and the maximum is 5GB.
 14747  
 14748  -   Config: upload_cutoff
 14749  -   Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
 14750  -   Type: SizeSuffix
 14751  -   Default: 200M
 14752  
 14753  –qingstor-chunk-size
 14754  
 14755  Chunk size to use for uploading.
 14756  
 14757  When uploading files larger than upload_cutoff they will be uploaded as
 14758  multipart uploads using this chunk size.
 14759  
Note that “--qingstor-upload-concurrency” chunks of this size are
buffered in memory per transfer.
 14762  
 14763  If you are transferring large files over high speed links and you have
 14764  enough memory, then increasing this will speed up the transfers.
 14765  
 14766  -   Config: chunk_size
 14767  -   Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
 14768  -   Type: SizeSuffix
 14769  -   Default: 4M
 14770  
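As an illustration, a transfer of large files over a fast link with
plenty of memory might raise the chunk size for a single run (the value
is only an example):

    rclone copy /home/local/directory remote:bucket --qingstor-chunk-size 16M
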
 14771  –qingstor-upload-concurrency
 14772  
 14773  Concurrency for multipart uploads.
 14774  
 14775  This is the number of chunks of the same file that are uploaded
 14776  concurrently.
 14777  
NB if you set this to > 1 then the checksums of multipart uploads become
corrupted (the uploads themselves are not corrupted though).
 14780  
If you are uploading small numbers of large files over high speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
 14784  
 14785  -   Config: upload_concurrency
 14786  -   Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
 14787  -   Type: int
 14788  -   Default: 1
 14789  
 14790  
 14791  Swift
 14792  
Swift refers to Openstack Object Storage. Commercial implementations
include:
 14795  
 14796  -   Rackspace Cloud Files
 14797  -   Memset Memstore
 14798  -   OVH Object Storage
 14799  -   Oracle Cloud Storage
 14800  -   IBM Bluemix Cloud ObjectStorage Swift
 14801  
 14802  Paths are specified as remote:container (or remote: for the lsd
 14803  command.) You may put subdirectories in too, eg
 14804  remote:container/path/to/dir.
 14805  
 14806  Here is an example of making a swift configuration. First run
 14807  
 14808      rclone config
 14809  
 14810  This will guide you through an interactive setup process.
 14811  
 14812      No remotes found - make a new one
 14813      n) New remote
 14814      s) Set configuration password
 14815      q) Quit config
 14816      n/s/q> n
 14817      name> remote
 14818      Type of storage to configure.
 14819      Choose a number from below, or type in your own value
 14820       1 / Amazon Drive
 14821         \ "amazon cloud drive"
 14822       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 14823         \ "s3"
 14824       3 / Backblaze B2
 14825         \ "b2"
 14826       4 / Box
 14827         \ "box"
 14828       5 / Cache a remote
 14829         \ "cache"
 14830       6 / Dropbox
 14831         \ "dropbox"
 14832       7 / Encrypt/Decrypt a remote
 14833         \ "crypt"
 14834       8 / FTP Connection
 14835         \ "ftp"
 14836       9 / Google Cloud Storage (this is not Google Drive)
 14837         \ "google cloud storage"
 14838      10 / Google Drive
 14839         \ "drive"
 14840      11 / Hubic
 14841         \ "hubic"
 14842      12 / Local Disk
 14843         \ "local"
 14844      13 / Microsoft Azure Blob Storage
 14845         \ "azureblob"
 14846      14 / Microsoft OneDrive
 14847         \ "onedrive"
 14848      15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 14849         \ "swift"
 14850      16 / Pcloud
 14851         \ "pcloud"
 14852      17 / QingCloud Object Storage
 14853         \ "qingstor"
 14854      18 / SSH/SFTP Connection
 14855         \ "sftp"
 14856      19 / Webdav
 14857         \ "webdav"
 14858      20 / Yandex Disk
 14859         \ "yandex"
 14860      21 / http Connection
 14861         \ "http"
 14862      Storage> swift
 14863      Get swift credentials from environment variables in standard OpenStack form.
 14864      Choose a number from below, or type in your own value
 14865       1 / Enter swift credentials in the next step
 14866         \ "false"
 14867       2 / Get swift credentials from environment vars. Leave other fields blank if using this.
 14868         \ "true"
 14869      env_auth> true
 14870      User name to log in (OS_USERNAME).
 14871      user> 
 14872      API key or password (OS_PASSWORD).
 14873      key> 
 14874      Authentication URL for server (OS_AUTH_URL).
 14875      Choose a number from below, or type in your own value
 14876       1 / Rackspace US
 14877         \ "https://auth.api.rackspacecloud.com/v1.0"
 14878       2 / Rackspace UK
 14879         \ "https://lon.auth.api.rackspacecloud.com/v1.0"
 14880       3 / Rackspace v2
 14881         \ "https://identity.api.rackspacecloud.com/v2.0"
 14882       4 / Memset Memstore UK
 14883         \ "https://auth.storage.memset.com/v1.0"
 14884       5 / Memset Memstore UK v2
 14885         \ "https://auth.storage.memset.com/v2.0"
 14886       6 / OVH
 14887         \ "https://auth.cloud.ovh.net/v2.0"
 14888      auth> 
 14889      User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
 14890      user_id> 
 14891      User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
 14892      domain> 
 14893      Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
 14894      tenant> 
 14895      Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
 14896      tenant_id> 
 14897      Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
 14898      tenant_domain> 
 14899      Region name - optional (OS_REGION_NAME)
 14900      region> 
 14901      Storage URL - optional (OS_STORAGE_URL)
 14902      storage_url> 
 14903      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
 14904      auth_token> 
 14905      AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
 14906      auth_version> 
 14907      Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
 14908      Choose a number from below, or type in your own value
 14909       1 / Public (default, choose this if not sure)
 14910         \ "public"
 14911       2 / Internal (use internal service net)
 14912         \ "internal"
 14913       3 / Admin
 14914         \ "admin"
 14915      endpoint_type> 
 14916      Remote config
 14917      --------------------
 14918      [test]
 14919      env_auth = true
 14920      user = 
 14921      key = 
 14922      auth = 
 14923      user_id = 
 14924      domain = 
 14925      tenant = 
 14926      tenant_id = 
 14927      tenant_domain = 
 14928      region = 
 14929      storage_url = 
 14930      auth_token = 
 14931      auth_version = 
 14932      endpoint_type = 
 14933      --------------------
 14934      y) Yes this is OK
 14935      e) Edit this remote
 14936      d) Delete this remote
 14937      y/e/d> y
 14938  
 14939  This remote is called remote and can now be used like this
 14940  
 14941  See all containers
 14942  
 14943      rclone lsd remote:
 14944  
 14945  Make a new container
 14946  
 14947      rclone mkdir remote:container
 14948  
 14949  List the contents of a container
 14950  
 14951      rclone ls remote:container
 14952  
 14953  Sync /home/local/directory to the remote container, deleting any excess
 14954  files in the container.
 14955  
 14956      rclone sync /home/local/directory remote:container
 14957  
 14958  Configuration from an OpenStack credentials file
 14959  
An OpenStack credentials file typically looks something like this
(without the comments)
 14962  
 14963      export OS_AUTH_URL=https://a.provider.net/v2.0
 14964      export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
 14965      export OS_TENANT_NAME="1234567890123456"
 14966      export OS_USERNAME="123abc567xy"
 14967      echo "Please enter your OpenStack Password: "
 14968      read -sr OS_PASSWORD_INPUT
 14969      export OS_PASSWORD=$OS_PASSWORD_INPUT
 14970      export OS_REGION_NAME="SBG1"
 14971      if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
 14972  
 14973  The config file needs to look something like this where $OS_USERNAME
 14974  represents the value of the OS_USERNAME variable - 123abc567xy in the
 14975  example above.
 14976  
 14977      [remote]
 14978      type = swift
 14979      user = $OS_USERNAME
 14980      key = $OS_PASSWORD
 14981      auth = $OS_AUTH_URL
 14982      tenant = $OS_TENANT_NAME
 14983  
 14984  Note that you may (or may not) need to set region too - try without
 14985  first.
 14986  
 14987  Configuration from the environment
 14988  
 14989  If you prefer you can configure rclone to use swift using a standard set
 14990  of OpenStack environment variables.
 14991  
 14992  When you run through the config, make sure you choose true for env_auth
 14993  and leave everything else blank.
 14994  
 14995  rclone will then set any empty config parameters from the environment
 14996  using standard OpenStack environment variables. There is a list of the
 14997  variables in the docs for the swift library.
 14998  
 14999  Using an alternate authentication method
 15000  
 15001  If your OpenStack installation uses a non-standard authentication method
that might not yet be supported by rclone or the underlying swift
library, you can authenticate externally (e.g. by manually calling the
openstack commands to get a token). Then, you just need to pass the two
 15005  configuration variables auth_token and storage_url. If they are both
 15006  provided, the other variables are ignored. rclone will not try to
 15007  authenticate but instead assume it is already authenticated and use
 15008  these two variables to access the OpenStack installation.
 15009  
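A minimal sketch of such a pre-authenticated remote might look like this
in the config file (both values are placeholders obtained from your
external authentication step):

    [remote]
    type = swift
    auth_token = TOKEN_FROM_EXTERNAL_AUTH
    storage_url = https://storage.example.com/v1/AUTH_tenant
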
 15010  Using rclone without a config file
 15011  
 15012  You can use rclone with swift without a config file, if desired, like
 15013  this:
 15014  
 15015      source openstack-credentials-file
 15016      export RCLONE_CONFIG_MYREMOTE_TYPE=swift
 15017      export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
 15018      rclone lsd myremote:
 15019  
 15020  –fast-list
 15021  
 15022  This remote supports --fast-list which allows you to use fewer
 15023  transactions in exchange for more memory. See the rclone docs for more
 15024  details.
 15025  
 15026  –update and –use-server-modtime
 15027  
As noted below, the modified time is stored as metadata on the object.
 15029  It is used by default for all operations that require checking the time
 15030  a file was last updated. It allows rclone to treat the remote more like
 15031  a true filesystem, but it is inefficient because it requires an extra
 15032  API call to retrieve the metadata.
 15033  
 15034  For many operations, the time the object was last uploaded to the remote
 15035  is sufficient to determine if it is “dirty”. By using --update along
 15036  with --use-server-modtime, you can avoid the extra API call and simply
 15037  upload files whose local modtime is newer than the time it was last
 15038  uploaded.
 15039  
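For example, to upload only files whose local modification time is newer
than the time they were last uploaded, reusing the container from the
examples above:

    rclone sync --update --use-server-modtime /home/local/directory remote:container
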
 15040  Standard Options
 15041  
 15042  Here are the standard options specific to swift (Openstack Swift
 15043  (Rackspace Cloud Files, Memset Memstore, OVH)).
 15044  
 15045  –swift-env-auth
 15046  
 15047  Get swift credentials from environment variables in standard OpenStack
 15048  form.
 15049  
 15050  -   Config: env_auth
 15051  -   Env Var: RCLONE_SWIFT_ENV_AUTH
 15052  -   Type: bool
 15053  -   Default: false
 15054  -   Examples:
 15055      -   “false”
 15056          -   Enter swift credentials in the next step
 15057      -   “true”
 15058          -   Get swift credentials from environment vars. Leave other
 15059              fields blank if using this.
 15060  
 15061  –swift-user
 15062  
 15063  User name to log in (OS_USERNAME).
 15064  
 15065  -   Config: user
 15066  -   Env Var: RCLONE_SWIFT_USER
 15067  -   Type: string
 15068  -   Default: ""
 15069  
 15070  –swift-key
 15071  
 15072  API key or password (OS_PASSWORD).
 15073  
 15074  -   Config: key
 15075  -   Env Var: RCLONE_SWIFT_KEY
 15076  -   Type: string
 15077  -   Default: ""
 15078  
 15079  –swift-auth
 15080  
 15081  Authentication URL for server (OS_AUTH_URL).
 15082  
 15083  -   Config: auth
 15084  -   Env Var: RCLONE_SWIFT_AUTH
 15085  -   Type: string
 15086  -   Default: ""
 15087  -   Examples:
 15088      -   “https://auth.api.rackspacecloud.com/v1.0”
 15089          -   Rackspace US
 15090      -   “https://lon.auth.api.rackspacecloud.com/v1.0”
 15091          -   Rackspace UK
 15092      -   “https://identity.api.rackspacecloud.com/v2.0”
 15093          -   Rackspace v2
 15094      -   “https://auth.storage.memset.com/v1.0”
 15095          -   Memset Memstore UK
 15096      -   “https://auth.storage.memset.com/v2.0”
 15097          -   Memset Memstore UK v2
 15098      -   “https://auth.cloud.ovh.net/v2.0”
 15099          -   OVH
 15100  
 15101  –swift-user-id
 15102  
 15103  User ID to log in - optional - most swift systems use user and leave
 15104  this blank (v3 auth) (OS_USER_ID).
 15105  
 15106  -   Config: user_id
 15107  -   Env Var: RCLONE_SWIFT_USER_ID
 15108  -   Type: string
 15109  -   Default: ""
 15110  
 15111  –swift-domain
 15112  
 15113  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
 15114  
 15115  -   Config: domain
 15116  -   Env Var: RCLONE_SWIFT_DOMAIN
 15117  -   Type: string
 15118  -   Default: ""
 15119  
 15120  –swift-tenant
 15121  
 15122  Tenant name - optional for v1 auth, this or tenant_id required otherwise
 15123  (OS_TENANT_NAME or OS_PROJECT_NAME)
 15124  
 15125  -   Config: tenant
 15126  -   Env Var: RCLONE_SWIFT_TENANT
 15127  -   Type: string
 15128  -   Default: ""
 15129  
 15130  –swift-tenant-id
 15131  
 15132  Tenant ID - optional for v1 auth, this or tenant required otherwise
 15133  (OS_TENANT_ID)
 15134  
 15135  -   Config: tenant_id
 15136  -   Env Var: RCLONE_SWIFT_TENANT_ID
 15137  -   Type: string
 15138  -   Default: ""
 15139  
 15140  –swift-tenant-domain
 15141  
 15142  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
 15143  
 15144  -   Config: tenant_domain
 15145  -   Env Var: RCLONE_SWIFT_TENANT_DOMAIN
 15146  -   Type: string
 15147  -   Default: ""
 15148  
 15149  –swift-region
 15150  
 15151  Region name - optional (OS_REGION_NAME)
 15152  
 15153  -   Config: region
 15154  -   Env Var: RCLONE_SWIFT_REGION
 15155  -   Type: string
 15156  -   Default: ""
 15157  
 15158  –swift-storage-url
 15159  
 15160  Storage URL - optional (OS_STORAGE_URL)
 15161  
 15162  -   Config: storage_url
 15163  -   Env Var: RCLONE_SWIFT_STORAGE_URL
 15164  -   Type: string
 15165  -   Default: ""
 15166  
 15167  –swift-auth-token
 15168  
 15169  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
 15170  
 15171  -   Config: auth_token
 15172  -   Env Var: RCLONE_SWIFT_AUTH_TOKEN
 15173  -   Type: string
 15174  -   Default: ""
 15175  
 15176  –swift-application-credential-id
 15177  
 15178  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
 15179  
 15180  -   Config: application_credential_id
 15181  -   Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
 15182  -   Type: string
 15183  -   Default: ""
 15184  
 15185  –swift-application-credential-name
 15186  
 15187  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
 15188  
 15189  -   Config: application_credential_name
 15190  -   Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
 15191  -   Type: string
 15192  -   Default: ""
 15193  
 15194  –swift-application-credential-secret
 15195  
 15196  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
 15197  
 15198  -   Config: application_credential_secret
 15199  -   Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
 15200  -   Type: string
 15201  -   Default: ""
 15202  
 15203  –swift-auth-version
 15204  
 15205  AuthVersion - optional - set to (1,2,3) if your auth URL has no version
 15206  (ST_AUTH_VERSION)
 15207  
 15208  -   Config: auth_version
 15209  -   Env Var: RCLONE_SWIFT_AUTH_VERSION
 15210  -   Type: int
 15211  -   Default: 0
 15212  
 15213  –swift-endpoint-type
 15214  
 15215  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
 15216  
 15217  -   Config: endpoint_type
 15218  -   Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
 15219  -   Type: string
 15220  -   Default: “public”
 15221  -   Examples:
 15222      -   “public”
 15223          -   Public (default, choose this if not sure)
 15224      -   “internal”
 15225          -   Internal (use internal service net)
 15226      -   “admin”
 15227          -   Admin
 15228  
 15229  –swift-storage-policy
 15230  
 15231  The storage policy to use when creating a new container
 15232  
 15233  This applies the specified storage policy when creating a new container.
 15234  The policy cannot be changed afterwards. The allowed configuration
 15235  values and their meaning depend on your Swift storage provider.
 15236  
 15237  -   Config: storage_policy
 15238  -   Env Var: RCLONE_SWIFT_STORAGE_POLICY
 15239  -   Type: string
 15240  -   Default: ""
 15241  -   Examples:
 15242      -   ""
 15243          -   Default
 15244      -   “pcs”
 15245          -   OVH Public Cloud Storage
 15246      -   “pca”
 15247          -   OVH Public Cloud Archive
 15248  
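For example, to create a container using OVH’s Public Cloud Archive
policy listed above, you might run (the container name is just an
example):

    rclone mkdir remote:archive --swift-storage-policy pca
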
 15249  Advanced Options
 15250  
 15251  Here are the advanced options specific to swift (Openstack Swift
 15252  (Rackspace Cloud Files, Memset Memstore, OVH)).
 15253  
 15254  –swift-chunk-size
 15255  
 15256  Above this size files will be chunked into a _segments container.
 15257  
The default for this is 5GB which is its maximum value.
 15260  
 15261  -   Config: chunk_size
 15262  -   Env Var: RCLONE_SWIFT_CHUNK_SIZE
 15263  -   Type: SizeSuffix
 15264  -   Default: 5G
 15265  
 15266  –swift-no-chunk
 15267  
 15268  Don’t chunk files during streaming upload.
 15269  
 15270  When doing streaming uploads (eg using rcat or mount) setting this flag
 15271  will cause the swift backend to not upload chunked files.
 15272  
 15273  This will limit the maximum upload size to 5GB. However non chunked
 15274  files are easier to deal with and have an MD5SUM.
 15275  
 15276  Rclone will still chunk files bigger than chunk_size when doing normal
 15277  copy operations.
 15278  
 15279  -   Config: no_chunk
 15280  -   Env Var: RCLONE_SWIFT_NO_CHUNK
 15281  -   Type: bool
 15282  -   Default: false
 15283  
 15284  Modified time
 15285  
 15286  The modified time is stored as metadata on the object as
 15287  X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
 15288  
This is a de facto standard (used in the official python-swiftclient
 15290  amongst others) for storing the modification time for an object.
 15291  
 15292  Limitations
 15293  
 15294  The Swift API doesn’t return a correct MD5SUM for segmented files
 15295  (Dynamic or Static Large Objects) so rclone won’t check or use the
 15296  MD5SUM for these.
 15297  
 15298  Troubleshooting
 15299  
 15300  Rclone gives Failed to create file system for “remote:”: Bad Request
 15301  
 15302  Due to an oddity of the underlying swift library, it gives a “Bad
 15303  Request” error rather than a more sensible error when the authentication
 15304  fails for Swift.
 15305  
 15306  So this most likely means your username / password is wrong. You can
 15307  investigate further with the --dump-bodies flag.
 15308  
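For example, to see the full HTTP requests and responses while
reproducing the failure, you could run:

    rclone lsd remote: --dump-bodies
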
 15309  This may also be caused by specifying the region when you shouldn’t have
 15310  (eg OVH).
 15311  
Rclone gives Failed to create file system: Response didn’t have storage url and auth token
 15313  
 15314  This is most likely caused by forgetting to specify your tenant when
 15315  setting up a swift remote.
 15316  
 15317  
 15318  pCloud
 15319  
 15320  Paths are specified as remote:path
 15321  
 15322  Paths may be as deep as required, eg remote:directory/subdirectory.
 15323  
 15324  The initial setup for pCloud involves getting a token from pCloud which
 15325  you need to do in your browser. rclone config walks you through it.
 15326  
 15327  Here is an example of how to make a remote called remote. First run:
 15328  
 15329       rclone config
 15330  
 15331  This will guide you through an interactive setup process:
 15332  
 15333      No remotes found - make a new one
 15334      n) New remote
 15335      s) Set configuration password
 15336      q) Quit config
 15337      n/s/q> n
 15338      name> remote
 15339      Type of storage to configure.
 15340      Choose a number from below, or type in your own value
 15341       1 / Amazon Drive
 15342         \ "amazon cloud drive"
 15343       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 15344         \ "s3"
 15345       3 / Backblaze B2
 15346         \ "b2"
 15347       4 / Box
 15348         \ "box"
 15349       5 / Dropbox
 15350         \ "dropbox"
 15351       6 / Encrypt/Decrypt a remote
 15352         \ "crypt"
 15353       7 / FTP Connection
 15354         \ "ftp"
 15355       8 / Google Cloud Storage (this is not Google Drive)
 15356         \ "google cloud storage"
 15357       9 / Google Drive
 15358         \ "drive"
 15359      10 / Hubic
 15360         \ "hubic"
 15361      11 / Local Disk
 15362         \ "local"
 15363      12 / Microsoft Azure Blob Storage
 15364         \ "azureblob"
 15365      13 / Microsoft OneDrive
 15366         \ "onedrive"
 15367      14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 15368         \ "swift"
 15369      15 / Pcloud
 15370         \ "pcloud"
 15371      16 / QingCloud Object Storage
 15372         \ "qingstor"
 15373      17 / SSH/SFTP Connection
 15374         \ "sftp"
 15375      18 / Yandex Disk
 15376         \ "yandex"
 15377      19 / http Connection
 15378         \ "http"
 15379      Storage> pcloud
 15380      Pcloud App Client Id - leave blank normally.
 15381      client_id> 
 15382      Pcloud App Client Secret - leave blank normally.
 15383      client_secret> 
 15384      Remote config
 15385      Use auto config?
 15386       * Say Y if not sure
 15387       * Say N if you are working on a remote or headless machine
 15388      y) Yes
 15389      n) No
 15390      y/n> y
 15391      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
 15392      Log in and authorize rclone for access
 15393      Waiting for code...
 15394      Got code
 15395      --------------------
 15396      [remote]
 15397      client_id = 
 15398      client_secret = 
 15399      token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
 15400      --------------------
 15401      y) Yes this is OK
 15402      e) Edit this remote
 15403      d) Delete this remote
 15404      y/e/d> y
 15405  
 15406  See the remote setup docs for how to set it up on a machine with no
 15407  Internet browser available.
 15408  
 15409  Note that rclone runs a webserver on your local machine to collect the
 15410  token as returned from pCloud. This only runs from the moment it opens
 15411  your browser to the moment you get back the verification code. This is
on http://127.0.0.1:53682/ and it may require you to unblock it
temporarily if you are running a host firewall.
 15414  
 15415  Once configured you can then use rclone like this,
 15416  
 15417  List directories in top level of your pCloud
 15418  
 15419      rclone lsd remote:
 15420  
 15421  List all the files in your pCloud
 15422  
 15423      rclone ls remote:
 15424  
To copy a local directory to a pCloud directory called backup
 15426  
 15427      rclone copy /home/source remote:backup
 15428  
 15429  Modified time and hashes
 15430  
 15431  pCloud allows modification times to be set on objects accurate to 1
 15432  second. These will be used to detect whether objects need syncing or
not. In order to set a modification time, pCloud requires the object to
be re-uploaded.
 15435  
 15436  pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum
 15437  flag.
 15438  
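For example, to sync using file hashes rather than modification times to
decide what needs transferring:

    rclone sync --checksum /home/source remote:backup
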
 15439  Deleting files
 15440  
 15441  Deleted files will be moved to the trash. Your subscription level will
 15442  determine how long items stay in the trash. rclone cleanup can be used
 15443  to empty the trash.
 15444  
 15445  Standard Options
 15446  
 15447  Here are the standard options specific to pcloud (Pcloud).
 15448  
 15449  –pcloud-client-id
 15450  
Pcloud App Client Id. Leave blank normally.
 15452  
 15453  -   Config: client_id
 15454  -   Env Var: RCLONE_PCLOUD_CLIENT_ID
 15455  -   Type: string
 15456  -   Default: ""
 15457  
 15458  –pcloud-client-secret
 15459  
Pcloud App Client Secret. Leave blank normally.
 15461  
 15462  -   Config: client_secret
 15463  -   Env Var: RCLONE_PCLOUD_CLIENT_SECRET
 15464  -   Type: string
 15465  -   Default: ""
 15466  
 15467  
 15468  SFTP
 15469  
 15470  SFTP is the Secure (or SSH) File Transfer Protocol.
 15471  
 15472  SFTP runs over SSH v2 and is installed as standard with most modern SSH
 15473  installations.
 15474  
 15475  Paths are specified as remote:path. If the path does not begin with a /
 15476  it is relative to the home directory of the user. An empty path remote:
 15477  refers to the user’s home directory.
 15478  
 15479  "Note that some SFTP servers will need the leading / - Synology is a
 15480  good example of this. rsync.net, on the other hand, requires users to
 15481  OMIT the leading /.
 15482  
 15483  Here is an example of making an SFTP configuration. First run
 15484  
 15485      rclone config
 15486  
 15487  This will guide you through an interactive setup process.
 15488  
 15489      No remotes found - make a new one
 15490      n) New remote
 15491      s) Set configuration password
 15492      q) Quit config
 15493      n/s/q> n
 15494      name> remote
 15495      Type of storage to configure.
 15496      Choose a number from below, or type in your own value
 15497       1 / Amazon Drive
 15498         \ "amazon cloud drive"
 15499       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 15500         \ "s3"
 15501       3 / Backblaze B2
 15502         \ "b2"
 15503       4 / Dropbox
 15504         \ "dropbox"
 15505       5 / Encrypt/Decrypt a remote
 15506         \ "crypt"
 15507       6 / FTP Connection
 15508         \ "ftp"
 15509       7 / Google Cloud Storage (this is not Google Drive)
 15510         \ "google cloud storage"
 15511       8 / Google Drive
 15512         \ "drive"
 15513       9 / Hubic
 15514         \ "hubic"
 15515      10 / Local Disk
 15516         \ "local"
 15517      11 / Microsoft OneDrive
 15518         \ "onedrive"
 15519      12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 15520         \ "swift"
 15521      13 / SSH/SFTP Connection
 15522         \ "sftp"
 15523      14 / Yandex Disk
 15524         \ "yandex"
 15525      15 / http Connection
 15526         \ "http"
 15527      Storage> sftp
 15528      SSH host to connect to
 15529      Choose a number from below, or type in your own value
 15530       1 / Connect to example.com
 15531         \ "example.com"
 15532      host> example.com
 15533      SSH username, leave blank for current username, ncw
 15534      user> sftpuser
 15535      SSH port, leave blank to use default (22)
 15536      port> 
 15537      SSH password, leave blank to use ssh-agent.
 15538      y) Yes type in my own password
 15539      g) Generate random password
 15540      n) No leave this optional password blank
 15541      y/g/n> n
 15542      Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
 15543      key_file> 
 15544      Remote config
 15545      --------------------
 15546      [remote]
 15547      host = example.com
 15548      user = sftpuser
 15549      port = 
 15550      pass = 
 15551      key_file = 
 15552      --------------------
 15553      y) Yes this is OK
 15554      e) Edit this remote
 15555      d) Delete this remote
 15556      y/e/d> y
 15557  
 15558  This remote is called remote and can now be used like this:
 15559  
 15560  See all directories in the home directory
 15561  
 15562      rclone lsd remote:
 15563  
 15564  Make a new directory
 15565  
 15566      rclone mkdir remote:path/to/directory
 15567  
 15568  List the contents of a directory
 15569  
 15570      rclone ls remote:path/to/directory
 15571  
 15572  Sync /home/local/directory to the remote directory, deleting any excess
 15573  files in the directory.
 15574  
 15575      rclone sync /home/local/directory remote:directory
 15576  
 15577  SSH Authentication
 15578  
 15579  The SFTP remote supports three authentication methods:
 15580  
 15581  -   Password
 15582  -   Key file
 15583  -   ssh-agent
 15584  
 15585  Key files should be PEM-encoded private key files. For instance
 15586  /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files
 15587  are supported.
 15588  
 15589  If you don’t specify pass or key_file then rclone will attempt to
 15590  contact an ssh-agent.
 15591  
 15592  You can also specify key_use_agent to force the usage of an ssh-agent.
 15593  In this case key_file can also be specified to force the usage of a
 15594  specific key in the ssh-agent.
 15595  
 15596  Using an ssh-agent is the only way to load encrypted OpenSSH keys at the
 15597  moment.
 15598  
 15599  If you set the --sftp-ask-password option, rclone will prompt for a
 15600  password when needed and no password has been configured.
 15601  
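As an illustration, a key-file based remote might look like this in the
config file (the host, user and key path are examples only):

    [remote]
    type = sftp
    host = example.com
    user = sftpuser
    key_file = /home/sftpuser/.ssh/id_rsa
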
 15602  ssh-agent on macOS
 15603  
 15604  Note that there seem to be various problems with using an ssh-agent on
 15605  macOS due to recent changes in the OS. The most effective work-around
 15606  seems to be to start an ssh-agent in each session, eg
 15607  
 15608      eval `ssh-agent -s` && ssh-add -A
 15609  
 15610  And then at the end of the session
 15611  
 15612      eval `ssh-agent -k`
 15613  
 15614  These commands can be used in scripts of course.
 15615  
 15616  Modified time
 15617  
 15618  Modified times are stored on the server to 1 second precision.
 15619  
 15620  Modified times are used in syncing and are fully supported.
 15621  
 15622  Some SFTP servers disable setting/modifying the file modification time
 15623  after upload (for example, certain configurations of ProFTPd with
 15624  mod_sftp). If you are using one of these servers, you can set the option
 15625  set_modtime = false in your RClone backend configuration to disable this
 15626  behaviour.
 15627  
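The same setting can also be applied for a single run with the
corresponding flag, for example:

    rclone copy /home/local/directory remote:directory --sftp-set-modtime=false
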
 15628  Standard Options
 15629  
 15630  Here are the standard options specific to sftp (SSH/SFTP Connection).
 15631  
 15632  –sftp-host
 15633  
 15634  SSH host to connect to
 15635  
 15636  -   Config: host
 15637  -   Env Var: RCLONE_SFTP_HOST
 15638  -   Type: string
 15639  -   Default: ""
 15640  -   Examples:
 15641      -   “example.com”
 15642          -   Connect to example.com
 15643  
 15644  –sftp-user
 15645  
 15646  SSH username, leave blank for current username, ncw
 15647  
 15648  -   Config: user
 15649  -   Env Var: RCLONE_SFTP_USER
 15650  -   Type: string
 15651  -   Default: ""
 15652  
 15653  –sftp-port
 15654  
 15655  SSH port, leave blank to use default (22)
 15656  
 15657  -   Config: port
 15658  -   Env Var: RCLONE_SFTP_PORT
 15659  -   Type: string
 15660  -   Default: ""
 15661  
 15662  –sftp-pass
 15663  
 15664  SSH password, leave blank to use ssh-agent.
 15665  
 15666  -   Config: pass
 15667  -   Env Var: RCLONE_SFTP_PASS
 15668  -   Type: string
 15669  -   Default: ""
 15670  
 15671  –sftp-key-file
 15672  
 15673  Path to PEM-encoded private key file, leave blank or set key-use-agent
 15674  to use ssh-agent.
 15675  
 15676  -   Config: key_file
 15677  -   Env Var: RCLONE_SFTP_KEY_FILE
 15678  -   Type: string
 15679  -   Default: ""
 15680  
 15681  –sftp-key-file-pass
 15682  
 15683  The passphrase to decrypt the PEM-encoded private key file.
 15684  
 15685  Only PEM encrypted key files (old OpenSSH format) are supported.
 15686  Encrypted keys in the new OpenSSH format can’t be used.
 15687  
 15688  -   Config: key_file_pass
 15689  -   Env Var: RCLONE_SFTP_KEY_FILE_PASS
 15690  -   Type: string
 15691  -   Default: ""
 15692  
 15693  –sftp-key-use-agent
 15694  
 15695  When set forces the usage of the ssh-agent.
 15696  
 15697  When key-file is also set, the “.pub” file of the specified key-file is
read and only the associated key is requested from the ssh-agent. This
allows you to avoid Too many authentication failures for *username* errors
 15700  when the ssh-agent contains many keys.
 15701  
 15702  -   Config: key_use_agent
 15703  -   Env Var: RCLONE_SFTP_KEY_USE_AGENT
 15704  -   Type: bool
 15705  -   Default: false
 15706  
 15707  –sftp-use-insecure-cipher
 15708  
 15709  Enable the use of the aes128-cbc cipher. This cipher is insecure and may
 15710  allow plaintext data to be recovered by an attacker.
 15711  
 15712  -   Config: use_insecure_cipher
 15713  -   Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
 15714  -   Type: bool
 15715  -   Default: false
 15716  -   Examples:
 15717      -   “false”
 15718          -   Use default Cipher list.
 15719      -   “true”
 15720          -   Enables the use of the aes128-cbc cipher.
 15721  
 15722  –sftp-disable-hashcheck
 15723  
 15724  Disable the execution of SSH commands to determine if remote file
 15725  hashing is available. Leave blank or set to false to enable hashing
 15726  (recommended), set to true to disable hashing.
 15727  
 15728  -   Config: disable_hashcheck
 15729  -   Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
 15730  -   Type: bool
 15731  -   Default: false
 15732  
 15733  Advanced Options
 15734  
 15735  Here are the advanced options specific to sftp (SSH/SFTP Connection).
 15736  
 15737  –sftp-ask-password
 15738  
 15739  Allow asking for SFTP password when needed.
 15740  
 15741  -   Config: ask_password
 15742  -   Env Var: RCLONE_SFTP_ASK_PASSWORD
 15743  -   Type: bool
 15744  -   Default: false
 15745  
 15746  –sftp-path-override
 15747  
 15748  Override path used by SSH connection.
 15749  
 15750  This allows checksum calculation when SFTP and SSH paths are different.
 15751  This issue affects among others Synology NAS boxes.
 15752  
 15753  Shared folders can be found in directories representing volumes
 15754  
 15755      rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
 15756  
 15757  Home directory can be found in a shared folder called “home”
 15758  
 15759      rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
 15760  
 15761  -   Config: path_override
 15762  -   Env Var: RCLONE_SFTP_PATH_OVERRIDE
 15763  -   Type: string
 15764  -   Default: ""
 15765  
 15766  –sftp-set-modtime
 15767  
 15768  Set the modified time on the remote if set.
 15769  
 15770  -   Config: set_modtime
 15771  -   Env Var: RCLONE_SFTP_SET_MODTIME
 15772  -   Type: bool
 15773  -   Default: true
 15774  
 15775  Limitations
 15776  
 15777  SFTP supports checksums if the same login has shell access and md5sum or
 15778  sha1sum as well as echo are in the remote’s PATH. This remote
 15779  checksumming (file hashing) is recommended and enabled by default.
 15780  Disabling the checksumming may be required if you are connecting to SFTP
 15781  servers which are not under your control, and to which the execution of
 15782  remote commands is prohibited. Set the configuration option
 15783  disable_hashcheck to true to disable checksumming.
 15784  
SFTP also supports about if the same login has shell access and df is in
the remote’s PATH. about will return the total space, free space, and
 15787  used space on the remote for the disk of the specified path on the
 15788  remote or, if not set, the disk of the root on the remote. about will
 15789  fail if it does not have shell access or if df is not in the remote’s
 15790  PATH.
 15791  
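For example, to see the total, used and free space on the disk backing
the remote’s home directory:

    rclone about remote:
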
Note that on some SFTP servers (eg Synology) the paths are different for
SSH and SFTP so the hashes can’t be calculated properly. For them, using
disable_hashcheck is a good idea.
 15795  
 15796  The only ssh agent supported under Windows is Putty’s pageant.
 15797  
 15798  The Go SSH library disables the use of the aes128-cbc cipher by default,
 15799  due to security concerns. This can be re-enabled on a per-connection
 15800  basis by setting the use_insecure_cipher setting in the configuration
 15801  file to true. Further details on the insecurity of this cipher can be
found in this paper: http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf
 15803  
 15804  SFTP isn’t supported under plan9 until this issue is fixed.
 15805  
 15806  Note that since SFTP isn’t HTTP based the following flags don’t work
 15807  with it: --dump-headers, --dump-bodies, --dump-auth
 15808  
 15809  Note that --timeout isn’t supported (but --contimeout is).
 15810  
 15811  
 15812  Union
 15813  
 15814  The union remote provides a unification similar to UnionFS using other
 15815  remotes.
 15816  
 15817  Paths may be as deep as required or a local path, eg
 15818  remote:directory/subdirectory or /directory/subdirectory.
 15819  
 15820  During the initial setup with rclone config you will specify the target
remotes as a space separated list. The target remotes can be either
local paths or other remotes.
 15823  
 15824  The order of the remotes is important as it defines which remotes take
 15825  precedence over others if there are files with the same name in the same
 15826  logical path. The last remote is the topmost remote and replaces files
 15827  with the same name from previous remotes.
 15828  
Only the last remote is used to write to and delete from; all other
remotes are read-only.
 15831  
Subfolders can be used in a target remote. Assume a union remote named
 15833  backup with the remotes mydrive:private/backup mydrive2:/backup.
 15834  Invoking rclone mkdir backup:desktop is exactly the same as invoking
 15835  rclone mkdir mydrive2:/backup/desktop.
 15836  
 15837  There will be no special handling of paths containing .. segments.
 15838  Invoking rclone mkdir backup:../desktop is exactly the same as invoking
 15839  rclone mkdir mydrive2:/backup/../desktop.
 15840  
 15841  Here is an example of how to make a union called remote for local
 15842  folders. First run:
 15843  
 15844       rclone config
 15845  
 15846  This will guide you through an interactive setup process:
 15847  
 15848      No remotes found - make a new one
 15849      n) New remote
 15850      s) Set configuration password
 15851      q) Quit config
 15852      n/s/q> n
 15853      name> remote
 15854      Type of storage to configure.
 15855      Choose a number from below, or type in your own value
 15856       1 / Alias for an existing remote
 15857         \ "alias"
 15858       2 / Amazon Drive
 15859         \ "amazon cloud drive"
 15860       3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
 15861         \ "s3"
 15862       4 / Backblaze B2
 15863         \ "b2"
 15864       5 / Box
 15865         \ "box"
 15866       6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes
 15867         \ "union"
 15868       7 / Cache a remote
 15869         \ "cache"
 15870       8 / Dropbox
 15871         \ "dropbox"
 15872       9 / Encrypt/Decrypt a remote
 15873         \ "crypt"
 15874      10 / FTP Connection
 15875         \ "ftp"
 15876      11 / Google Cloud Storage (this is not Google Drive)
 15877         \ "google cloud storage"
 15878      12 / Google Drive
 15879         \ "drive"
 15880      13 / Hubic
 15881         \ "hubic"
 15882      14 / JottaCloud
 15883         \ "jottacloud"
 15884      15 / Local Disk
 15885         \ "local"
 15886      16 / Mega
 15887         \ "mega"
 15888      17 / Microsoft Azure Blob Storage
 15889         \ "azureblob"
 15890      18 / Microsoft OneDrive
 15891         \ "onedrive"
 15892      19 / OpenDrive
 15893         \ "opendrive"
 15894      20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 15895         \ "swift"
 15896      21 / Pcloud
 15897         \ "pcloud"
 15898      22 / QingCloud Object Storage
 15899         \ "qingstor"
 15900      23 / SSH/SFTP Connection
 15901         \ "sftp"
 15902      24 / Webdav
 15903         \ "webdav"
 15904      25 / Yandex Disk
 15905         \ "yandex"
 15906      26 / http Connection
 15907         \ "http"
 15908      Storage> union
 15909      List of space separated remotes.
 15910      Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
 15911      The last remote is used to write to.
 15912      Enter a string value. Press Enter for the default ("").
 15913      remotes> C:\dir1 C:\dir2 C:\dir3
 15914      Remote config
 15915      --------------------
 15916      [remote]
 15917      type = union
 15918      remotes = C:\dir1 C:\dir2 C:\dir3
 15919      --------------------
 15920      y) Yes this is OK
 15921      e) Edit this remote
 15922      d) Delete this remote
 15923      y/e/d> y
 15924      Current remotes:
 15925  
 15926      Name                 Type
 15927      ====                 ====
 15928      remote               union
 15929  
 15930      e) Edit existing remote
 15931      n) New remote
 15932      d) Delete remote
 15933      r) Rename remote
 15934      c) Copy remote
 15935      s) Set configuration password
 15936      q) Quit config
 15937      e/n/d/r/c/s/q> q
 15938  
 15939  Once configured you can then use rclone like this,
 15940  
 15941  List directories in top level in C:\dir1, C:\dir2 and C:\dir3
 15942  
 15943      rclone lsd remote:
 15944  
 15945  List all the files in C:\dir1, C:\dir2 and C:\dir3
 15946  
 15947      rclone ls remote:
 15948  
 15949  Copy another local directory to the union directory called source, which
 15950  will be placed into C:\dir3
 15951  
 15952      rclone copy C:\source remote:source
 15953  
 15954  Standard Options
 15955  
 15956  Here are the standard options specific to union (A stackable unification
 15957  remote, which can appear to merge the contents of several remotes).
 15958  
 15959  –union-remotes
 15960  
 15961  List of space separated remotes. Can be ‘remotea:test/dir remoteb:’,
 15962  ‘“remotea:test/space dir” remoteb:’, etc. The last remote is used to
 15963  write to.
 15964  
 15965  -   Config: remotes
 15966  -   Env Var: RCLONE_UNION_REMOTES
 15967  -   Type: string
 15968  -   Default: ""
 15969  
 15970  
 15971  WebDAV
 15972  
 15973  Paths are specified as remote:path
 15974  
 15975  Paths may be as deep as required, eg remote:directory/subdirectory.
 15976  
 15977  To configure the WebDAV remote you will need to have a URL for it, and a
 15978  username and password. If you know what kind of system you are
 15979  connecting to then rclone can enable extra features.
 15980  
 15981  Here is an example of how to make a remote called remote. First run:
 15982  
 15983       rclone config
 15984  
 15985  This will guide you through an interactive setup process:
 15986  
 15987      No remotes found - make a new one
 15988      n) New remote
 15989      s) Set configuration password
 15990      q) Quit config
 15991      n/s/q> n
 15992      name> remote
 15993      Type of storage to configure.
 15994      Choose a number from below, or type in your own value
 15995      [snip]
 15996      22 / Webdav
 15997         \ "webdav"
 15998      [snip]
 15999      Storage> webdav
 16000      URL of http host to connect to
 16001      Choose a number from below, or type in your own value
 16002       1 / Connect to example.com
 16003         \ "https://example.com"
 16004      url> https://example.com/remote.php/webdav/
 16005      Name of the Webdav site/service/software you are using
 16006      Choose a number from below, or type in your own value
 16007       1 / Nextcloud
 16008         \ "nextcloud"
 16009       2 / Owncloud
 16010         \ "owncloud"
 16011       3 / Sharepoint
 16012         \ "sharepoint"
 16013       4 / Other site/service or software
 16014         \ "other"
 16015      vendor> 1
 16016      User name
 16017      user> user
 16018      Password.
 16019      y) Yes type in my own password
 16020      g) Generate random password
 16021      n) No leave this optional password blank
 16022      y/g/n> y
 16023      Enter the password:
 16024      password:
 16025      Confirm the password:
 16026      password:
 16027      Bearer token instead of user/pass (eg a Macaroon)
 16028      bearer_token> 
 16029      Remote config
 16030      --------------------
 16031      [remote]
 16032      type = webdav
 16033      url = https://example.com/remote.php/webdav/
 16034      vendor = nextcloud
 16035      user = user
 16036      pass = *** ENCRYPTED ***
 16037      bearer_token = 
 16038      --------------------
 16039      y) Yes this is OK
 16040      e) Edit this remote
 16041      d) Delete this remote
 16042      y/e/d> y
 16043  
 16044  Once configured you can then use rclone like this,
 16045  
 16046  List directories in top level of your WebDAV
 16047  
 16048      rclone lsd remote:
 16049  
 16050  List all the files in your WebDAV
 16051  
 16052      rclone ls remote:
 16053  
 16054  To copy a local directory to a WebDAV directory called backup
 16055  
 16056      rclone copy /home/source remote:backup
 16057  
 16058  Modified time and hashes
 16059  
 16060  Plain WebDAV does not support modified times. However, when used with
 16061  Owncloud or Nextcloud, rclone will support modified times.
 16062  
 16063  Likewise plain WebDAV does not support hashes, however when used with
 16064  Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending
 16065  on the exact version of Owncloud or Nextcloud hashes may appear on all
 16066  objects, or only on objects which had a hash uploaded with them.
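
        If you want to see which hashes your server reports, you can list
        checksums directly, for example (using the backup directory from
        above):

            rclone md5sum remote:backup
            rclone sha1sum remote:backup

        Objects for which the server has no stored hash may show an empty
        checksum.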
 16067  
 16068  Standard Options
 16069  
 16070  Here are the standard options specific to webdav (Webdav).
 16071  
 16072  –webdav-url
 16073  
 16074  URL of http host to connect to
 16075  
 16076  -   Config: url
 16077  -   Env Var: RCLONE_WEBDAV_URL
 16078  -   Type: string
 16079  -   Default: ""
 16080  -   Examples:
 16081      -   “https://example.com”
 16082          -   Connect to example.com
 16083  
 16084  –webdav-vendor
 16085  
 16086  Name of the Webdav site/service/software you are using
 16087  
 16088  -   Config: vendor
 16089  -   Env Var: RCLONE_WEBDAV_VENDOR
 16090  -   Type: string
 16091  -   Default: ""
 16092  -   Examples:
 16093      -   “nextcloud”
 16094          -   Nextcloud
 16095      -   “owncloud”
 16096          -   Owncloud
 16097      -   “sharepoint”
 16098          -   Sharepoint
 16099      -   “other”
 16100          -   Other site/service or software
 16101  
 16102  –webdav-user
 16103  
 16104  User name
 16105  
 16106  -   Config: user
 16107  -   Env Var: RCLONE_WEBDAV_USER
 16108  -   Type: string
 16109  -   Default: ""
 16110  
 16111  –webdav-pass
 16112  
 16113  Password.
 16114  
 16115  -   Config: pass
 16116  -   Env Var: RCLONE_WEBDAV_PASS
 16117  -   Type: string
 16118  -   Default: ""
 16119  
 16120  –webdav-bearer-token
 16121  
 16122  Bearer token instead of user/pass (eg a Macaroon)
 16123  
 16124  -   Config: bearer_token
 16125  -   Env Var: RCLONE_WEBDAV_BEARER_TOKEN
 16126  -   Type: string
 16127  -   Default: ""
 16128  
 16129  
 16130  Provider notes
 16131  
 16132  See below for notes on specific providers.
 16133  
 16134  Owncloud
 16135  
 16136  Click on the settings cog in the bottom right of the page and this will
 16137  show the WebDAV URL that rclone needs in the config step. It will look
 16138  something like https://example.com/remote.php/webdav/.
 16139  
 16140  Owncloud supports modified times using the X-OC-Mtime header.
 16141  
 16142  Nextcloud
 16143  
 16144  This is configured in an identical way to Owncloud. Note that Nextcloud
 16145  does not support streaming of files (rcat) whereas Owncloud does. This
 16146  may be fixed in the future.
 16147  
 16148  Put.io
 16149  
 16150  put.io can be accessed in a read-only way using WebDAV.
 16151  
 16152  Configure the url as https://webdav.put.io and use your normal account
 16153  username and password for user and pass. Set the vendor to other.
 16154  
 16155  Your config file should end up looking like this:
 16156  
 16157      [putio]
 16158      type = webdav
 16159      url = https://webdav.put.io
 16160      vendor = other
 16161      user = YourUserName
 16162      pass = encryptedpassword
 16163  
 16164  If you are using put.io with rclone mount then use the --read-only flag
 16165  to signal to the OS that it can’t write to the mount.
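
        For example, a minimal sketch (the mount point /mnt/putio is
        hypothetical):

            rclone mount --read-only putio: /mnt/putio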
 16166  
 16167  For more help see the put.io webdav docs.
 16168  
 16169  Sharepoint
 16170  
 16171  Rclone can be used with Sharepoint provided by OneDrive for Business or
 16172  Office365 Education accounts. This feature is only needed for a few of
 16173  these accounts, mostly Office365 Education ones. These accounts are
 16174  sometimes not verified by the domain owner (github#1975).
 16175  
 16176  This means that these accounts can’t be added using the official API
 16177  (other accounts should work with the “onedrive” option). However, it is
 16178  possible to access them using WebDAV.
 16179  
 16180  To use a Sharepoint remote with rclone, add it like this. First, you
 16181  need to get your remote’s URL:
 16182  
 16183  -   Go here to open your OneDrive or to sign in
 16184  -   Now take a look at your address bar, the URL should look like this:
 16185      https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx
 16186  
 16187  You’ll only need this URL up to the email address. After that, you’ll
 16188  most likely want to add “/Documents”. That subdirectory contains the
 16189  actual data stored on your OneDrive.
 16190  
 16191  Add the remote to rclone like this: Configure the url as
 16192  https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
 16193  and use your normal account email and password for user and pass. If you
 16194  have 2FA enabled, you have to generate an app password. Set the vendor
 16195  to sharepoint.
 16196  
 16197  Your config file should look like this:
 16198  
 16199      [sharepoint]
 16200      type = webdav
 16201      url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
 16202      vendor = sharepoint
 16203      user = YourEmailAddress
 16204      pass = encryptedpassword
 16205  
 16206  dCache
 16207  
 16208  dCache is a storage system with WebDAV doors that support, besides
 16209  basic and x509, authentication with Macaroons (bearer tokens).
 16210  
 16211  Configure as normal using the other vendor type. Don’t enter a username
 16212  or password; instead enter your Macaroon as the bearer_token.
 16213  
 16214  The config will end up looking something like this.
 16215  
 16216      [dcache]
 16217      type = webdav
 16218      url = https://dcache...
 16219      vendor = other
 16220      user =
 16221      pass =
 16222      bearer_token = your-macaroon
 16223  
 16224  There is a script that obtains a Macaroon from a dCache WebDAV endpoint,
 16225  and creates an rclone config file.
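
        Alternatively, the remote can be created non-interactively with
        rclone config create; a minimal sketch, where the URL and the macaroon
        value are placeholders:

            rclone config create dcache webdav url https://dcache.example.org/ vendor other bearer_token your-macaroon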
 16226  
 16227  
 16228  Yandex Disk
 16229  
 16230  Yandex Disk is a cloud storage solution created by Yandex.
 16231  
 16232  Yandex paths may be as deep as required, eg
 16233  remote:directory/subdirectory.
 16234  
 16235  Here is an example of making a yandex configuration. First run
 16236  
 16237      rclone config
 16238  
 16239  This will guide you through an interactive setup process:
 16240  
 16241      No remotes found - make a new one
 16242      n) New remote
 16243      s) Set configuration password
 16244      n/s> n
 16245      name> remote
 16246      Type of storage to configure.
 16247      Choose a number from below, or type in your own value
 16248       1 / Amazon Drive
 16249         \ "amazon cloud drive"
 16250       2 / Amazon S3 (also Dreamhost, Ceph, Minio)
 16251         \ "s3"
 16252       3 / Backblaze B2
 16253         \ "b2"
 16254       4 / Dropbox
 16255         \ "dropbox"
 16256       5 / Encrypt/Decrypt a remote
 16257         \ "crypt"
 16258       6 / Google Cloud Storage (this is not Google Drive)
 16259         \ "google cloud storage"
 16260       7 / Google Drive
 16261         \ "drive"
 16262       8 / Hubic
 16263         \ "hubic"
 16264       9 / Local Disk
 16265         \ "local"
 16266      10 / Microsoft OneDrive
 16267         \ "onedrive"
 16268      11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 16269         \ "swift"
 16270      12 / SSH/SFTP Connection
 16271         \ "sftp"
 16272      13 / Yandex Disk
 16273         \ "yandex"
 16274      Storage> 13
 16275      Yandex Client Id - leave blank normally.
 16276      client_id>
 16277      Yandex Client Secret - leave blank normally.
 16278      client_secret>
 16279      Remote config
 16280      Use auto config?
 16281       * Say Y if not sure
 16282       * Say N if you are working on a remote or headless machine
 16283      y) Yes
 16284      n) No
 16285      y/n> y
 16286      If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
 16287      Log in and authorize rclone for access
 16288      Waiting for code...
 16289      Got code
 16290      --------------------
 16291      [remote]
 16292      client_id =
 16293      client_secret =
 16294      token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
 16295      --------------------
 16296      y) Yes this is OK
 16297      e) Edit this remote
 16298      d) Delete this remote
 16299      y/e/d> y
 16300  
 16301  See the remote setup docs for how to set it up on a machine with no
 16302  Internet browser available.
 16303  
 16304  Note that rclone runs a webserver on your local machine to collect the
 16305  token as returned from Yandex Disk. This only runs from the moment it
 16306  opens your browser to the moment you get back the verification code.
 16307  This is on http://127.0.0.1:53682/ and it may require you to unblock
 16308  it temporarily if you are running a host firewall.
 16309  
 16310  Once configured you can then use rclone like this,
 16311  
 16312  See top level directories
 16313  
 16314      rclone lsd remote:
 16315  
 16316  Make a new directory
 16317  
 16318      rclone mkdir remote:directory
 16319  
 16320  List the contents of a directory
 16321  
 16322      rclone ls remote:directory
 16323  
 16324  Sync /home/local/directory to the remote path, deleting any excess files
 16325  in the path.
 16326  
 16327      rclone sync /home/local/directory remote:directory
 16328  
 16329  Modified time
 16330  
 16331  Modified times are supported and are stored accurate to 1 ns in custom
 16332  metadata called rclone_modified, in RFC3339 format with nanoseconds.
 16333  
 16334  MD5 checksums
 16335  
 16336  MD5 checksums are natively supported by Yandex Disk.
 16337  
 16338  Emptying Trash
 16339  
 16340  If you wish to empty your trash you can use the rclone cleanup remote:
 16341  command which will permanently delete all your trashed files. This
 16342  command does not take any path arguments.
 16343  
 16344  Quota information
 16345  
 16346  To view your current quota you can use the rclone about remote: command
 16347  which will display your usage limit (quota) and the current usage.
 16348  
 16349  Limitations
 16350  
 16351  When uploading very large files (bigger than about 5GB) you will need to
 16352  increase the --timeout parameter. This is because Yandex pauses (perhaps
 16353  to calculate the MD5SUM for the entire file) before returning
 16354  confirmation that the file has been uploaded. The default handling of
 16355  timeouts in rclone is to assume a 5 minute pause is an error and close
 16356  the connection - you’ll see net/http: timeout awaiting response headers
 16357  errors in the logs if this is happening. Setting the timeout to twice
 16358  the maximum file size in GB should be enough, so if you want to upload a
 16359  30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
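
        For example, to upload a 30GB file (the file and destination names
        are placeholders):

            rclone copy --timeout 60m /path/to/bigfile remote:backup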
 16360  
 16361  Standard Options
 16362  
 16363  Here are the standard options specific to yandex (Yandex Disk).
 16364  
 16365  –yandex-client-id
 16366  
 16367  Yandex Client Id Leave blank normally.
 16368  
 16369  -   Config: client_id
 16370  -   Env Var: RCLONE_YANDEX_CLIENT_ID
 16371  -   Type: string
 16372  -   Default: ""
 16373  
 16374  –yandex-client-secret
 16375  
 16376  Yandex Client Secret Leave blank normally.
 16377  
 16378  -   Config: client_secret
 16379  -   Env Var: RCLONE_YANDEX_CLIENT_SECRET
 16380  -   Type: string
 16381  -   Default: ""
 16382  
 16383  Advanced Options
 16384  
 16385  Here are the advanced options specific to yandex (Yandex Disk).
 16386  
 16387  –yandex-unlink
 16388  
 16389  Remove existing public link to file/folder with link command rather than
 16390  creating. Default is false, meaning link command will create or retrieve
 16391  public link.
 16392  
 16393  -   Config: unlink
 16394  -   Env Var: RCLONE_YANDEX_UNLINK
 16395  -   Type: bool
 16396  -   Default: false
 16397  
 16398  
 16399  Local Filesystem
 16400  
 16401  Local paths are specified as normal filesystem paths, eg
 16402  /path/to/wherever, so
 16403  
 16404      rclone sync /home/source /tmp/destination
 16405  
 16406  Will sync /home/source to /tmp/destination
 16407  
 16408  These can be configured in the config file for consistency’s sake, but
 16409  it is probably easier not to.
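
        If you do want a named local remote, a minimal sketch of such a config
        entry (the name mydisk is hypothetical):

            [mydisk]
            type = local

        With that in place, rclone ls mydisk:/path/to/dir behaves the same as
        rclone ls /path/to/dir.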
 16410  
 16411  Modified time
 16412  
 16413  Rclone reads and writes the modified time using an accuracy determined
 16414  by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and
 16415  1 second on OS X.
 16416  
 16417  Filenames
 16418  
 16419  Filenames are expected to be encoded in UTF-8 on disk. This is the
 16420  normal case for Windows and OS X.
 16421  
 16422  There is a bit more uncertainty in the Linux world, but new
 16423  distributions will have UTF-8 encoded file names. If you are using an
 16424  old Linux filesystem with non UTF-8 file names (eg latin1) then you can
 16425  use the convmv tool to convert the filesystem to UTF-8. This tool is
 16426  available in most distributions’ package managers.
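
        For example, a sketch of converting a tree of latin1 file names to
        UTF-8 with convmv (the path is a placeholder):

            convmv -f latin1 -t utf-8 -r /path/with/old/names

        By default convmv only prints what it would rename; add the --notest
        flag to actually perform the conversion.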
 16427  
 16428  If an invalid (non-UTF8) filename is read, the invalid characters will
 16429  be replaced with the unicode replacement character, ‘�’. rclone will
 16430  emit a debug message in this case (use -v to see), eg
 16431  
 16432      Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
 16433  
 16434  Long paths on Windows
 16435  
 16436  Rclone handles long paths automatically, by converting all paths to long
 16437  UNC paths which allows paths up to 32,767 characters.
 16438  
 16439  This is why you will see that a path such as c:\files is converted to
 16440  the UNC path \\?\c:\files in the output, and \\server\share is
 16441  converted to \\?\UNC\server\share.
 16442  
 16443  However, in rare cases this may cause problems with buggy file system
 16444  drivers like EncFS. To disable UNC conversion globally, add this to your
 16445  .rclone.conf file:
 16446  
 16447      [local]
 16448      nounc = true
 16449  
 16450  If you want to selectively disable UNC, you can add it to a separate
 16451  entry like this:
 16452  
 16453      [nounc]
 16454      type = local
 16455      nounc = true
 16456  
 16457  And use rclone like this:
 16458  
 16459      rclone copy c:\src nounc:z:\dst
 16460  
 16461  This will use UNC paths on c:\src but not on z:\dst. Of course this will
 16462  cause problems if the absolute path length of a file exceeds 258
 16463  characters on z, so only use this option if you have to.
 16464  
 16465  Symlinks / Junction points
 16466  
 16467  Normally rclone will ignore symlinks or junction points (which behave
 16468  like symlinks under Windows).
 16469  
 16470  If you supply --copy-links or -L then rclone will follow the symlink and
 16471  copy the pointed to file or directory. Note that this flag is
 16472  incompatible with --links / -l.
 16473  
 16474  This flag applies to all commands.
 16475  
 16476  For example, supposing you have a directory structure like this
 16477  
 16478      $ tree /tmp/a
 16479      /tmp/a
 16480      ├── b -> ../b
 16481      ├── expected -> ../expected
 16482      ├── one
 16483      └── two
 16484          └── three
 16485  
 16486  Then you can see the difference with and without the flag like this
 16487  
 16488      $ rclone ls /tmp/a
 16489              6 one
 16490              6 two/three
 16491  
 16492  and
 16493  
 16494      $ rclone -L ls /tmp/a
 16495           4174 expected
 16496              6 one
 16497              6 two/three
 16498              6 b/two
 16499              6 b/one
 16500  
 16501  –links, -l
 16502  
 16503  Normally rclone will ignore symlinks or junction points (which behave
 16504  like symlinks under Windows).
 16505  
 16506  If you supply this flag then rclone will copy symbolic links from the
 16507  local storage, and store them as text files, with a ‘.rclonelink’ suffix
 16508  in the remote storage.
 16509  
 16510  The text file will contain the target of the symbolic link (see
 16511  example).
 16512  
 16513  This flag applies to all commands.
 16514  
 16515  For example, supposing you have a directory structure like this
 16516  
 16517      $ tree /tmp/a
 16518      /tmp/a
 16519      ├── file1 -> ./file4
 16520      └── file2 -> /home/user/file3
 16521  
 16522  Copying the entire directory with ‘-l’
 16523  
 16524      $ rclone copyto -l /tmp/a/ remote:/tmp/a/
 16525  
 16526  The remote files are created with a ‘.rclonelink’ suffix
 16527  
 16528      $ rclone ls remote:/tmp/a
 16529             5 file1.rclonelink
 16530            14 file2.rclonelink
 16531  
 16532  The remote files will contain the target of the symbolic links
 16533  
 16534      $ rclone cat remote:/tmp/a/file1.rclonelink
 16535      ./file4
 16536  
 16537      $ rclone cat remote:/tmp/a/file2.rclonelink
 16538      /home/user/file3
 16539  
 16540  Copying them back with ‘-l’
 16541  
 16542      $ rclone copyto -l remote:/tmp/a/ /tmp/b/
 16543  
 16544      $ tree /tmp/b
 16545      /tmp/b
 16546      ├── file1 -> ./file4
 16547      └── file2 -> /home/user/file3
 16548  
 16549  However, if copied back without ‘-l’
 16550  
 16551      $ rclone copyto remote:/tmp/a/ /tmp/b/
 16552  
 16553      $ tree /tmp/b
 16554      /tmp/b
 16555      ├── file1.rclonelink
 16556      └── file2.rclonelink
 16557  
 16558  Note that this flag is incompatible with --copy-links / -L.
 16559  
 16560  Restricting filesystems with –one-file-system
 16561  
 16562  Normally rclone will recurse through filesystems as mounted.
 16563  
 16564  However if you set --one-file-system or -x this tells rclone to stay in
 16565  the filesystem specified by the root and not to recurse into different
 16566  file systems.
 16567  
 16568  For example if you have a directory hierarchy like this
 16569  
 16570      root
 16571      ├── disk1     - disk1 mounted on the root
 16572      │   └── file3 - stored on disk1
 16573      ├── disk2     - disk2 mounted on the root
 16574      │   └── file4 - stored on disk2
 16575      ├── file1     - stored on the root disk
 16576      └── file2     - stored on the root disk
 16577  
 16578  Using rclone --one-file-system copy root remote: will only copy file1
 16579  and file2. Eg
 16580  
 16581      $ rclone -q --one-file-system ls root
 16582              0 file1
 16583              0 file2
 16584  
 16585      $ rclone -q ls root
 16586              0 disk1/file3
 16587              0 disk2/file4
 16588              0 file1
 16589              0 file2
 16590  
 16591  NB Rclone (like most unix tools such as du, rsync and tar) treats a bind
 16592  mount to the same device as being on the same filesystem.
 16593  
 16594  NB This flag is only available on Unix based systems. On systems where
 16595  it isn’t supported (eg Windows) it will be ignored.
 16596  
 16597  Standard Options
 16598  
 16599  Here are the standard options specific to local (Local Disk).
 16600  
 16601  –local-nounc
 16602  
 16603  Disable UNC (long path names) conversion on Windows
 16604  
 16605  -   Config: nounc
 16606  -   Env Var: RCLONE_LOCAL_NOUNC
 16607  -   Type: string
 16608  -   Default: ""
 16609  -   Examples:
 16610      -   “true”
 16611          -   Disables long file names
 16612  
 16613  Advanced Options
 16614  
 16615  Here are the advanced options specific to local (Local Disk).
 16616  
 16617  –copy-links
 16618  
 16619  Follow symlinks and copy the pointed to item.
 16620  
 16621  -   Config: copy_links
 16622  -   Env Var: RCLONE_LOCAL_COPY_LINKS
 16623  -   Type: bool
 16624  -   Default: false
 16625  
 16626  –links
 16627  
 16628  Translate symlinks to/from regular files with a ‘.rclonelink’ extension
 16629  
 16630  -   Config: links
 16631  -   Env Var: RCLONE_LOCAL_LINKS
 16632  -   Type: bool
 16633  -   Default: false
 16634  
 16635  –skip-links
 16636  
 16637  Don’t warn about skipped symlinks. This flag disables warning messages
 16638  on skipped symlinks or junction points, as you explicitly acknowledge
 16639  that they should be skipped.
 16640  
 16641  -   Config: skip_links
 16642  -   Env Var: RCLONE_LOCAL_SKIP_LINKS
 16643  -   Type: bool
 16644  -   Default: false
 16645  
 16646  –local-no-unicode-normalization
 16647  
 16648  Don’t apply unicode normalization to paths and filenames (Deprecated)
 16649  
 16650  This flag is deprecated now. Rclone no longer normalizes unicode file
 16651  names, but it compares them with unicode normalization in the sync
 16652  routine instead.
 16653  
 16654  -   Config: no_unicode_normalization
 16655  -   Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
 16656  -   Type: bool
 16657  -   Default: false
 16658  
 16659  –local-no-check-updated
 16660  
 16661  Don’t check to see if the files change during upload
 16662  
 16663  Normally rclone checks the size and modification time of files as they
 16664  are being uploaded and aborts with a message which starts “can’t copy -
 16665  source file is being updated” if the file changes during upload.
 16666  
 16667  However on some file systems this modification time check may fail (eg
 16668  Glusterfs #2206) so this check can be disabled with this flag.
 16669  
 16670  -   Config: no_check_updated
 16671  -   Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
 16672  -   Type: bool
 16673  -   Default: false
 16674  
 16675  –one-file-system
 16676  
 16677  Don’t cross filesystem boundaries (unix/macOS only).
 16678  
 16679  -   Config: one_file_system
 16680  -   Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
 16681  -   Type: bool
 16682  -   Default: false
 16683  
 16684  
 16685  
 16686  CHANGELOG
 16687  
 16688  
 16689  v1.48.0 - 2019-06-15
 16690  
 16691  -   New commands
 16692      -   serve sftp: Serve an rclone remote over SFTP (Nick Craig-Wood)
 16693  -   New Features
 16694      -   Multi threaded downloads to local storage (Nick Craig-Wood)
 16695          -   controlled with --multi-thread-cutoff and
 16696              --multi-thread-streams
 16697      -   Use rclone.conf from rclone executable directory to enable
 16698          portable use (albertony)
 16699      -   Allow sync of a file and a directory with the same name
 16700          (forgems)
 16701          -   this is common on bucket based remotes, eg s3, gcs
 16702      -   Add --ignore-case-sync for forced case insensitivity (garry415)
 16703      -   Implement --stats-one-line-date and --stats-one-line-date-format
 16704          (Peter Berbec)
 16705      -   Log an ERROR for all commands which exit with non-zero status
 16706          (Nick Craig-Wood)
 16707      -   Use go-homedir to read the home directory more reliably (Nick
 16708          Craig-Wood)
 16709      -   Enable creating encrypted config through external script
 16710          invocation (Wojciech Smigielski)
 16711      -   build: Drop support for go1.8 (Nick Craig-Wood)
 16712      -   config: Make config create/update encrypt passwords where
 16713          necessary (Nick Craig-Wood)
 16714      -   copyurl: Honor --no-check-certificate (Stefan Breunig)
 16715      -   install: Linux skip man pages if no mandb (didil)
 16716      -   lsf: Support showing the Tier of the object (Nick Craig-Wood)
 16717      -   lsjson
 16718          -   Added EncryptedPath to output (calisro)
 16719          -   Support showing the Tier of the object (Nick Craig-Wood)
 16720          -   Add IsBucket field for bucket based remote listing of the
 16721              root (Nick Craig-Wood)
 16722      -   rc
 16723          -   Add --loopback flag to run commands directly without a
 16724              server (Nick Craig-Wood)
 16725          -   Add operations/fsinfo: Return information about the remote
 16726              (Nick Craig-Wood)
 16727          -   Skip auth for OPTIONS request (Nick Craig-Wood)
 16728          -   cmd/providers: Add DefaultStr, ValueStr and Type fields
 16729              (Nick Craig-Wood)
 16730          -   jobs: Make job expiry timeouts configurable (Aleksandar
 16731              Jankovic)
 16732      -   serve dlna reworked and improved (Dan Walters)
 16733      -   serve ftp: add --ftp-public-ip flag to specify public IP
 16734          (calistri)
 16735      -   serve restic: Add support for --private-repos in serve restic
 16736          (Florian Apolloner)
 16737      -   serve webdav: Combine serve webdav and serve http (Gary Kim)
 16738      -   size: Ignore negative sizes when calculating total (Garry
 16739          McNulty)
 16740  -   Bug Fixes
 16741      -   Make move and copy individual files obey --backup-dir (Nick
 16742          Craig-Wood)
 16743      -   If --ignore-checksum is in effect, don’t calculate checksum
 16744          (Nick Craig-Wood)
 16745      -   moveto: Fix case-insensitive same remote move (Gary Kim)
 16746      -   rc: Fix serving bucket based objects with --rc-serve (Nick
 16747          Craig-Wood)
 16748      -   serve webdav: Fix serveDir not being updated with changes from
 16749          webdav (Gary Kim)
 16750  -   Mount
 16751      -   Fix poll interval documentation (Animosity022)
 16752  -   VFS
 16753      -   Make WriteAt for non cached files work with non-sequential
 16754          writes (Nick Craig-Wood)
 16755  -   Local
 16756      -   Only calculate the required hashes for big speedup (Nick
 16757          Craig-Wood)
 16758      -   Log errors when listing instead of returning an error (Nick
 16759          Craig-Wood)
 16760      -   Fix preallocate warning on Linux with ZFS (Nick Craig-Wood)
 16761  -   Crypt
 16762      -   Make rclone dedupe work through crypt (Nick Craig-Wood)
 16763      -   Fix wrapping of ChangeNotify to decrypt directories properly
 16764          (Nick Craig-Wood)
 16765      -   Support PublicLink (rclone link) of underlying backend (Nick
 16766          Craig-Wood)
 16767      -   Implement Optional methods SetTier, GetTier (Nick Craig-Wood)
 16768  -   B2
 16769      -   Implement server side copy (Nick Craig-Wood)
 16770      -   Implement SetModTime (Nick Craig-Wood)
 16771  -   Drive
 16772      -   Fix move and copy from TeamDrive to GDrive (Fionera)
 16773      -   Add notes that cleanup works in the background on drive (Nick
 16774          Craig-Wood)
 16775      -   Add --drive-server-side-across-configs to default back to old
 16776          server side copy semantics by default (Nick Craig-Wood)
 16777      -   Add --drive-size-as-quota to show storage quota usage for file
 16778          size (Garry McNulty)
 16779  -   FTP
 16780      -   Add FTP List timeout (Jeff Quinn)
 16781      -   Add FTP over TLS support (Gary Kim)
 16782      -   Add --ftp-no-check-certificate option for FTPS (Gary Kim)
 16783  -   Google Cloud Storage
 16784      -   Fix upload errors when uploading pre 1970 files (Nick
 16785          Craig-Wood)
 16786  -   Jottacloud
 16787      -   Add support for selecting device and mountpoint. (buengese)
 16788  -   Mega
 16789      -   Add cleanup support (Gary Kim)
 16790  -   Onedrive
 16791      -   More accurately check if root is found (Cnly)
 16792  -   S3
 16793      -   Support S3 Accelerated endpoints with
 16794          --s3-use-accelerate-endpoint (Nick Craig-Wood)
 16795      -   Add config info for Wasabi’s EU Central endpoint (Robert Marko)
 16796      -   Make SetModTime work for GLACIER while syncing (Philip Harvey)
 16797  -   SFTP
 16798      -   Add About support (Gary Kim)
 16799      -   Fix about parsing of df results so it can cope with -ve results
 16800          (Nick Craig-Wood)
 16801      -   Send custom client version and debug server version (Nick
 16802          Craig-Wood)
 16803  -   WebDAV
 16804      -   Retry on 423 Locked errors (Nick Craig-Wood)
 16805  
 16806  
 16807  v1.47.0 - 2019-04-13
 16808  
 16809  -   New backends
 16810      -   Backend for Koofr cloud storage service. (jaKa)
 16811  -   New Features
 16812      -   Resume downloads if the reader fails in copy (Nick Craig-Wood)
 16813          -   this means rclone will restart transfers if the source has
 16814              an error
 16815          -   this is most useful for downloads or cloud to cloud copies
 16816      -   Use --fast-list for listing operations where it won’t use more
 16817          memory (Nick Craig-Wood)
 16818          -   this should speed up the following operations on remotes
 16819              which support ListR
 16820          -   dedupe, serve restic lsf, ls, lsl, lsjson, lsd, md5sum,
 16821              sha1sum, hashsum, size, delete, cat, settier
 16822          -   use --disable ListR to get old behaviour if required
 16823      -   Make --files-from traverse the destination unless --no-traverse
 16824          is set (Nick Craig-Wood)
 16825          -   this fixes --files-from with Google drive and excessive API
 16826              use in general.
 16827      -   Make server side copy account bytes and obey --max-transfer
 16828          (Nick Craig-Wood)
 16829      -   Add --create-empty-src-dirs flag and default to not creating
 16830          empty dirs (ishuah)
 16831      -   Add client side TLS/SSL flags
 16832          --ca-cert/--client-cert/--client-key (Nick Craig-Wood)
 16833      -   Implement --suffix-keep-extension for use with --suffix (Nick
 16834          Craig-Wood)
 16835      -   build:
 16836          -   Switch to semver compliant version tags to be go modules
 16837              compliant (Nick Craig-Wood)
 16838          -   Update to use go1.12.x for the build (Nick Craig-Wood)
 16839      -   serve dlna: Add connection manager service description to
 16840          improve compatibility (Dan Walters)
 16841      -   lsf: Add ‘e’ format to show encrypted names and ‘o’ for original
 16842          IDs (Nick Craig-Wood)
 16843      -   lsjson: Added --files-only and --dirs-only flags (calistri)
 16844      -   rc: Implement operations/publiclink the equivalent of
 16845          rclone link (Nick Craig-Wood)
 16846  -   Bug Fixes
 16847      -   accounting: Fix total ETA when --stats-unit bits is in effect
 16848          (Nick Craig-Wood)
 16849      -   Bash TAB completion
 16850          -   Use private custom func to fix clash between rclone and
 16851              kubectl (Nick Craig-Wood)
 16852          -   Fix for remotes with underscores in their names (Six)
 16853          -   Fix completion of remotes (Florian Gamböck)
 16854          -   Fix autocompletion of remote paths with spaces (Danil
 16855              Semelenov)
 16856      -   serve dlna: Fix root XML service descriptor (Dan Walters)
 16857      -   ncdu: Fix display corruption with Chinese characters (Nick
 16858          Craig-Wood)
 16859      -   Add SIGTERM to signals which run the exit handlers on unix (Nick
 16860          Craig-Wood)
 16861      -   rc: Reload filter when the options are set via the rc (Nick
 16862          Craig-Wood)
 16863  -   VFS / Mount
 16864      -   Fix FreeBSD: Ignore Truncate if called with no readers and
 16865          already the correct size (Nick Craig-Wood)
 16866      -   Read directory and check for a file before mkdir (Nick
 16867          Craig-Wood)
 16868      -   Shorten the locking window for vfs/refresh (Nick Craig-Wood)
 16869  -   Azure Blob
 16870      -   Enable MD5 checksums when uploading files bigger than the
 16871          “Cutoff” (Dr.Rx)
 16872      -   Fix SAS URL support (Nick Craig-Wood)
 16873  -   B2
 16874      -   Allow manual configuration of backblaze downloadUrl (Vince)
 16875      -   Ignore already_hidden error on remove (Nick Craig-Wood)
 16876      -   Ignore malformed src_last_modified_millis (Nick Craig-Wood)
 16877  -   Drive
 16878      -   Add --skip-checksum-gphotos to ignore incorrect checksums on
 16879          Google Photos (Nick Craig-Wood)
 16880      -   Allow server side move/copy between different remotes. (Fionera)
 16881      -   Add docs on team drives and --fast-list eventual consistency
 16882          (Nestar47)
 16883      -   Fix imports of text files (Nick Craig-Wood)
 16884      -   Fix range requests on 0 length files (Nick Craig-Wood)
 16885      -   Fix creation of duplicates with server side copy (Nick
 16886          Craig-Wood)
 16887  -   Dropbox
 16888      -   Retry blank errors to fix long listings (Nick Craig-Wood)
 16889  -   FTP
 16890      -   Add --ftp-concurrency to limit maximum number of connections
 16891          (Nick Craig-Wood)
 16892  -   Google Cloud Storage
 16893      -   Fall back to default application credentials (marcintustin)
 16894      -   Allow bucket policy only buckets (Nick Craig-Wood)
 16895  -   HTTP
 16896      -   Add --http-no-slash for websites with directories with no
 16897          slashes (Nick Craig-Wood)
 16898      -   Remove duplicates from listings (Nick Craig-Wood)
 16899      -   Fix socket leak on 404 errors (Nick Craig-Wood)
 16900  -   Jottacloud
 16901      -   Fix token refresh (Sebastian Bünger)
 16902      -   Add device registration (Oliver Heyme)
 16903  -   Onedrive
 16904      -   Implement graceful cancel of multipart uploads if rclone is
 16905          interrupted (Cnly)
 16906      -   Always add trailing colon to path when addressing items, (Cnly)
 16907      -   Return errors instead of panic for invalid uploads (Fabian
 16908          Möller)
 16909  -   S3
 16910      -   Add support for “Glacier Deep Archive” storage class (Manu)
 16911      -   Update Dreamhost endpoint (Nick Craig-Wood)
 16912      -   Note incompatibility with CEPH Jewel (Nick Craig-Wood)
 16913  -   SFTP
 16914      -   Allow custom ssh client config (Alexandru Bumbacea)
 16915  -   Swift
 16916      -   Obey Retry-After to enable OVH restore from cold storage (Nick
 16917          Craig-Wood)
 16918      -   Work around token expiry on CEPH (Nick Craig-Wood)
 16919  -   WebDAV
 16920      -   Allow IsCollection property to be integer or boolean (Nick
 16921          Craig-Wood)
 16922      -   Fix race when creating directories (Nick Craig-Wood)
 16923      -   Fix About/df when reading the available/total returns 0 (Nick
 16924          Craig-Wood)
 16925  
 16926  
 16927  v1.46 - 2019-02-09
 16928  
 16929  -   New backends
 16930      -   Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick
 16931          Craig-Wood)
 16932  -   New commands
 16933      -   serve dlna: serves a remote via DLNA for the local network
 16934          (nicolov)
 16935  -   New Features
 16936      -   copy, move: Restore deprecated --no-traverse flag (Nick
 16937          Craig-Wood)
 16938          -   This is useful for when transferring a small number of files
 16939              into a large destination
 16940      -   genautocomplete: Add remote path completion for bash completion
 16941          (Christopher Peterson & Danil Semelenov)
 16942      -   Buffer memory handling reworked to return memory to the OS
 16943          better (Nick Craig-Wood)
 16944          -   Buffer recycling library to replace sync.Pool
 16945          -   Optionally use memory mapped memory for better memory
 16946              shrinking
 16947          -   Enable with --use-mmap if having memory problems - not
 16948              default yet
 16949      -   Parallelise reading of files specified by --files-from (Nick
 16950          Craig-Wood)
 16951      -   check: Add stats showing total files matched. (Dario Guzik)
 16952      -   Allow rename/delete open files under Windows (Nick Craig-Wood)
 16953      -   lsjson: Use exactly the correct number of decimal places in the
 16954          seconds (Nick Craig-Wood)
 16955      -   Add cookie support with cmdline switch --use-cookies for all
 16956          HTTP based remotes (qip)
 16957      -   Warn if --checksum is set but there are no hashes available
 16958          (Nick Craig-Wood)
 16959      -   Rework rate limiting (pacer) to be more accurate and allow
 16960          bursting (Nick Craig-Wood)
 16961      -   Improve error reporting for too many/few arguments in commands
 16962          (Nick Craig-Wood)
 16963      -   listremotes: Remove -l short flag as it conflicts with the new
 16964          global flag (weetmuts)
 16965      -   Make http serving with auth generate INFO messages on auth fail
 16966          (Nick Craig-Wood)
 16967  -   Bug Fixes
 16968      -   Fix layout of stats (Nick Craig-Wood)
 16969      -   Fix --progress crash under Windows Jenkins (Nick Craig-Wood)
 16970      -   Fix transfer of google/onedrive docs by calling Rcat in Copy
 16971          when size is -1 (Cnly)
 16972      -   copyurl: Fix checking of --dry-run (Denis Skovpen)
 16973  -   Mount
 16974      -   Check that mountpoint and local directory to mount don’t overlap
 16975          (Nick Craig-Wood)
 16976      -   Fix mount size under 32 bit Windows (Nick Craig-Wood)
 16977  -   VFS
 16978      -   Implement renaming of directories for backends without DirMove
 16979          (Nick Craig-Wood)
 16980          -   now all backends except b2 support renaming directories
 16981      -   Implement --vfs-cache-max-size to limit the total size of the
 16982          cache (Nick Craig-Wood)
 16983      -   Add --dir-perms and --file-perms flags to set default
 16984          permissions (Nick Craig-Wood)
 16985      -   Fix deadlock on concurrent operations on a directory (Nick
 16986          Craig-Wood)
 16987      -   Fix deadlock between RWFileHandle.close and File.Remove (Nick
 16988          Craig-Wood)
 16989      -   Fix renaming/deleting open files with cache mode “writes” under
 16990          Windows (Nick Craig-Wood)
 16991      -   Fix panic on rename with --dry-run set (Nick Craig-Wood)
 16992      -   Fix vfs/refresh with recurse=true needing the --fast-list flag
 16993  -   Local
 16994      -   Add support for -l/--links (symbolic link translation)
 16995          (yair@unicorn)
 16996          -   this works by showing links as link.rclonelink - see local
 16997              backend docs for more info
 16998          -   this errors if used with -L/--copy-links
 16999      -   Fix renaming/deleting open files on Windows (Nick Craig-Wood)
 17000  -   Crypt
 17001      -   Check for maximum length before decrypting filename to fix panic
 17002          (Garry McNulty)
 17003  -   Azure Blob
 17004      -   Allow building azureblob backend on *BSD (themylogin)
 17005      -   Use the rclone HTTP client to support --dump headers, --tpslimit
 17006          etc (Nick Craig-Wood)
 17007      -   Use the s3 pacer for 0 delay in non error conditions (Nick
 17008          Craig-Wood)
 17009      -   Ignore directory markers (Nick Craig-Wood)
 17010      -   Stop Mkdir attempting to create existing containers (Nick
 17011          Craig-Wood)
 17012  -   B2
 17013      -   cleanup: will remove unfinished large files >24hrs old (Garry
 17014          McNulty)
 17015      -   For a bucket limited application key check the bucket name (Nick
 17016          Craig-Wood)
 17017          -   before this, rclone would use the authorised bucket
 17018              regardless of what you put on the command line
 17019      -   Added --b2-disable-checksum flag (Wojciech Smigielski)
 17020          -   this enables large files to be uploaded without a SHA-1 hash
 17021              for speed reasons
 17022  -   Drive
 17023      -   Set default pacer to 100ms for 10 tps (Nick Craig-Wood)
 17024          -   This fits the Google defaults much better and reduces the
 17025              403 errors massively
 17026          -   Add --drive-pacer-min-sleep and --drive-pacer-burst to
 17027              control the pacer
 17028      -   Improve ChangeNotify support for items with multiple parents
 17029          (Fabian Möller)
 17030      -   Fix ListR for items with multiple parents - this fixes oddities
 17031          with vfs/refresh (Fabian Möller)
 17032      -   Fix using --drive-impersonate and appfolders (Nick Craig-Wood)
 17033      -   Fix google docs in rclone mount for some (not all) applications
 17034          (Nick Craig-Wood)
 17035  -   Dropbox
 17036      -   Retry-After support for Dropbox backend (Mathieu Carbou)
 17037  -   FTP
 17038      -   Wait for 60 seconds for a connection to Close then declare it
 17039          dead (Nick Craig-Wood)
 17040          -   helps with indefinite hangs on some FTP servers
 17041  -   Google Cloud Storage
 17042      -   Update google cloud storage endpoints (weetmuts)
 17043  -   HTTP
 17044      -   Add an example with username and password which is supported but
 17045          wasn’t documented (Nick Craig-Wood)
 17046      -   Fix backend with --files-from and non-existent files (Nick
 17047          Craig-Wood)
 17048  -   Hubic
 17049      -   Make error message more informative if authentication fails
 17050          (Nick Craig-Wood)
 17051  -   Jottacloud
 17052      -   Resume and deduplication support (Oliver Heyme)
 17053      -   Use token auth for all API requests. Don’t store password anymore
 17054          (Sebastian Bünger)
 17055      -   Add support for 2-factor authentication (Sebastian Bünger)
 17056  -   Mega
 17057      -   Implement v2 account login which fixes logins for newer Mega
 17058          accounts (Nick Craig-Wood)
 17059      -   Return error if an unknown length file is attempted to be
 17060          uploaded (Nick Craig-Wood)
 17061      -   Add new error codes for better error reporting (Nick Craig-Wood)
 17062  -   Onedrive
 17063      -   Fix broken support for “shared with me” folders (Alex Chen)
 17064      -   Fix root ID not normalised (Cnly)
 17065      -   Return err instead of panic on unknown-sized uploads (Cnly)
 17066  -   Qingstor
 17067      -   Fix go routine leak on multipart upload errors (Nick Craig-Wood)
 17068      -   Add upload chunk size/concurrency/cutoff control (Nick
 17069          Craig-Wood)
 17070      -   Default --qingstor-upload-concurrency to 1 to work around bug
 17071          (Nick Craig-Wood)
 17072  -   S3
 17073      -   Implement --s3-upload-cutoff for single part uploads below this
 17074          (Nick Craig-Wood)
 17075      -   Change --s3-upload-concurrency default to 4 to increase
 17076          performance (Nick Craig-Wood)
 17077      -   Add --s3-bucket-acl to control bucket ACL (Nick Craig-Wood)
 17078      -   Auto detect region for buckets on operation failure (Nick
 17079          Craig-Wood)
 17080      -   Add GLACIER storage class (William Cocker)
 17081      -   Add Scaleway to s3 documentation (Rémy Léone)
 17082      -   Add AWS endpoint eu-north-1 (weetmuts)
 17083  -   SFTP
 17084      -   Add support for PEM encrypted private keys (Fabian Möller)
 17085      -   Add option to force the usage of an ssh-agent (Fabian Möller)
 17086      -   Perform environment variable expansion on key-file (Fabian
 17087          Möller)
 17088      -   Fix rmdir on Windows based servers (eg CrushFTP) (Nick
 17089          Craig-Wood)
 17090      -   Fix rmdir deleting directory contents on some SFTP servers (Nick
 17091          Craig-Wood)
 17092      -   Fix error on dangling symlinks (Nick Craig-Wood)
 17093  -   Swift
 17094      -   Add --swift-no-chunk to disable segmented uploads in rcat/mount
 17095          (Nick Craig-Wood)
 17096      -   Introduce application credential auth support (kayrus)
 17097      -   Fix memory usage by slimming Object (Nick Craig-Wood)
 17098      -   Fix extra requests on upload (Nick Craig-Wood)
 17099      -   Fix reauth on big files (Nick Craig-Wood)
 17100  -   Union
 17101      -   Fix poll-interval not working (Nick Craig-Wood)
 17102  -   WebDAV
 17103      -   Support About which means rclone mount will show the correct
 17104          disk size (Nick Craig-Wood)
 17105      -   Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick
 17106          Craig-Wood)
 17107      -   Fail soft on time parsing errors (Nick Craig-Wood)
 17108      -   Fix infinite loop on failed directory creation (Nick Craig-Wood)
 17109      -   Fix identification of directories for Bitrix Site Manager (Nick
 17110          Craig-Wood)
 17111      -   Fix upload of 0 length files on some servers (Nick Craig-Wood)
 17112      -   Fix if MKCOL fails with 423 Locked assume the directory exists
 17113          (Nick Craig-Wood)
 17114  
 17115  
 17116  v1.45 - 2018-11-24
 17117  
 17118  -   New backends
 17119      -   The Yandex backend was re-written - see below for details
 17120          (Sebastian Bünger)
 17121  -   New commands
 17122      -   rcd: New command just to serve the remote control API (Nick
 17123          Craig-Wood)
 17124  -   New Features
 17125      -   The remote control API (rc) was greatly expanded to allow full
 17126          control over rclone (Nick Craig-Wood)
 17127          -   sensitive operations require authorization or the
 17128              --rc-no-auth flag
 17129          -   config/* operations to configure rclone
 17130          -   options/* for reading/setting command line flags
 17131          -   operations/* for all low level operations, eg copy file,
 17132              list directory
 17133          -   sync/* for sync, copy and move
 17134          -   --rc-files flag to serve files on the rc http server
 17135              -   this is for building web native GUIs for rclone
 17136          -   Optionally serving objects on the rc http server
 17137          -   Ensure rclone fails to start up if the --rc port is in use
 17138              already
 17139          -   See the rc docs for more info
 17140      -   sync/copy/move
 17141          -   Make --files-from only read the objects specified and don’t
 17142              scan directories (Nick Craig-Wood)
 17143              -   This is a huge speed improvement for destinations with
 17144                  lots of files
 17145      -   filter: Add --ignore-case flag (Nick Craig-Wood)
 17146      -   ncdu: Add remove function (‘d’ key) (Henning Surmeier)
 17147      -   rc command
 17148          -   Add --json flag for structured JSON input (Nick Craig-Wood)
 17149          -   Add --user and --pass flags and interpret --rc-user,
 17150              --rc-pass, --rc-addr (Nick Craig-Wood)
 17151      -   build
 17152          -   Require go1.8 or later for compilation (Nick Craig-Wood)
 17153          -   Enable softfloat on MIPS arch (Scott Edlund)
 17154          -   Integration test framework revamped with a better report and
 17155              better retries (Nick Craig-Wood)
 17156  -   Bug Fixes
 17157      -   cmd: Make --progress update the stats correctly at the end (Nick
 17158          Craig-Wood)
 17159      -   config: Create config directory on save if it is missing (Nick
 17160          Craig-Wood)
 17161      -   dedupe: Check for existing filename before renaming a dupe file
 17162          (ssaqua)
 17163      -   move: Don’t create directories with --dry-run (Nick Craig-Wood)
 17164      -   operations: Fix Purge and Rmdirs when dir is not the root (Nick
 17165          Craig-Wood)
 17166      -   serve http/webdav/restic: Ensure rclone exits if the port is in
 17167          use (Nick Craig-Wood)
 17168  -   Mount
 17169      -   Make --volname work for Windows and macOS (Nick Craig-Wood)
 17170  -   Azure Blob
 17171      -   Avoid context deadline exceeded error by setting a large
 17172          TryTimeout value (brused27)
 17173      -   Fix erroneous Rmdir error “directory not empty” (Nick
 17174          Craig-Wood)
 17175      -   Wait for up to 60s to create a just deleted container (Nick
 17176          Craig-Wood)
 17177  -   Dropbox
 17178      -   Add dropbox impersonate support (Jake Coggiano)
 17179  -   Jottacloud
 17180      -   Fix bug in --fast-list handling of empty folders (albertony)
 17181  -   Opendrive
 17182      -   Fix transfer of files with + and & in (Nick Craig-Wood)
 17183      -   Fix retries of upload chunks (Nick Craig-Wood)
 17184  -   S3
 17185      -   Set ACL for server side copies to that provided by the user
 17186          (Nick Craig-Wood)
 17187      -   Fix role_arn, credential_source, … (Erik Swanson)
 17188      -   Add config info for Wasabi’s US-West endpoint (Henry Ptasinski)
 17189  -   SFTP
 17190      -   Ensure file hash checking is really disabled (Jon Fautley)
 17191  -   Swift
 17192      -   Add pacer for retries to make swift more reliable (Nick
 17193          Craig-Wood)
 17194  -   WebDAV
 17195      -   Add Content-Type to PUT requests (Nick Craig-Wood)
 17196      -   Fix config parsing so --webdav-user and --webdav-pass flags work
 17197          (Nick Craig-Wood)
 17198      -   Add RFC3339 date format (Ralf Hemberger)
 17199  -   Yandex
 17200      -   The yandex backend was re-written (Sebastian Bünger)
 17201          -   This implements low level retries (Sebastian Bünger)
 17202          -   Copy, Move, DirMove, PublicLink and About optional
 17203              interfaces (Sebastian Bünger)
 17204          -   Improved general error handling (Sebastian Bünger)
 17205          -   Removed ListR for now due to inconsistent behaviour
 17206              (Sebastian Bünger)
 17207  
 17208  
 17209  v1.44 - 2018-10-15
 17210  
 17211  -   New commands
 17212      -   serve ftp: Add ftp server (Antoine GIRARD)
 17213      -   settier: perform storage tier changes on supported remotes
 17214          (sandeepkru)
 17215  -   New Features
 17216      -   Reworked command line help
 17217          -   Make default help less verbose (Nick Craig-Wood)
 17218          -   Split flags up into global and backend flags (Nick
 17219              Craig-Wood)
 17220          -   Implement specialised help for flags and backends (Nick
 17221              Craig-Wood)
 17222          -   Show URL of backend help page when starting config (Nick
 17223              Craig-Wood)
 17224      -   stats: Long names now split in center (Joanna Marek)
 17225      -   Add --log-format flag for more control over log output (dcpu)
 17226      -   rc: Add support for OPTIONS and basic CORS (frenos)
 17227      -   stats: show FatalErrors and NoRetryErrors in stats (Cédric
 17228          Connes)
 17229  -   Bug Fixes
 17230      -   Fix -P not ending with a new line (Nick Craig-Wood)
 17231      -   config: don’t create default config dir when user supplies
 17232          --config (albertony)
 17233      -   Don’t print non-ASCII characters with --progress on windows (Nick
 17234          Craig-Wood)
 17235      -   Correct logs for excluded items (ssaqua)
 17236  -   Mount
 17237      -   Remove EXPERIMENTAL tags (Nick Craig-Wood)
 17238  -   VFS
 17239      -   Fix race condition detected by serve ftp tests (Nick Craig-Wood)
 17240      -   Add vfs/poll-interval rc command (Fabian Möller)
 17241      -   Enable rename for nearly all remotes using server side Move or
 17242          Copy (Nick Craig-Wood)
 17243      -   Reduce directory cache cleared by poll-interval (Fabian Möller)
 17244      -   Remove EXPERIMENTAL tags (Nick Craig-Wood)
 17245  -   Local
 17246      -   Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
 17247      -   Preallocate files on Windows to reduce fragmentation (Nick
 17248          Craig-Wood)
 17249      -   Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
 17250  -   Cache
 17251      -   Add cache/fetch rc function (Fabian Möller)
 17252      -   Fix worker scale down (Fabian Möller)
 17253      -   Improve performance by not sending info requests for cached
 17254          chunks (dcpu)
 17255      -   Fix error return value of cache/fetch rc method (Fabian Möller)
 17256      -   Documentation fix for cache-chunk-total-size (Anagh Kumar
 17257          Baranwal)
 17258      -   Preserve leading / in wrapped remote path (Fabian Möller)
 17259      -   Add plex_insecure option to skip certificate validation (Fabian
 17260          Möller)
 17261      -   Remove entries that no longer exist in the source (dcpu)
 17262  -   Crypt
 17263      -   Preserve leading / in wrapped remote path (Fabian Möller)
 17264  -   Alias
 17265      -   Fix handling of Windows network paths (Nick Craig-Wood)
 17266  -   Azure Blob
 17267      -   Add --azureblob-list-chunk parameter (Santiago Rodríguez)
 17268      -   Implemented settier command support on azureblob remote.
 17269          (sandeepkru)
 17270      -   Work around SDK bug which causes errors for chunk-sized files
 17271          (Nick Craig-Wood)
 17272  -   Box
 17273      -   Implement link sharing. (Sebastian Bünger)
 17274  -   Drive
 17275      -   Add --drive-import-formats - google docs can now be imported
 17276          (Fabian Möller)
 17277          -   Rewrite mime type and extension handling (Fabian Möller)
 17278          -   Add document links (Fabian Möller)
 17279          -   Add support for multipart document extensions (Fabian
 17280              Möller)
 17281          -   Add support for apps-script to json export (Fabian Möller)
 17282          -   Fix escaped chars in documents during list (Fabian Möller)
 17283      -   Add --drive-v2-download-min-size a workaround for slow downloads
 17284          (Fabian Möller)
 17285      -   Improve directory notifications in ChangeNotify (Fabian Möller)
 17286      -   When listing team drives in config, continue on failure (Nick
 17287          Craig-Wood)
 17288  -   FTP
 17289      -   Add a small pause after failed upload before deleting file (Nick
 17290          Craig-Wood)
 17291  -   Google Cloud Storage
 17292      -   Fix service_account_file being ignored (Fabian Möller)
 17293  -   Jottacloud
 17294      -   Minor improvement in quota info (omit if unlimited) (albertony)
 17295      -   Add --fast-list support (albertony)
 17296      -   Add permanent delete support: --jottacloud-hard-delete
 17297          (albertony)
 17298      -   Add link sharing support (albertony)
 17299      -   Fix handling of reserved characters. (Sebastian Bünger)
 17300      -   Fix socket leak on Object.Remove (Nick Craig-Wood)
 17301  -   Onedrive
 17302      -   Rework to support Microsoft Graph (Cnly)
 17303          -   NB this will require re-authenticating the remote
 17304      -   Removed upload cutoff and always do session uploads (Oliver
 17305          Heyme)
 17306      -   Use single-part upload for empty files (Cnly)
 17307      -   Fix new fields not saved when editing old config (Alex Chen)
 17308      -   Fix sometimes special chars in filenames not replaced (Alex
 17309          Chen)
 17310      -   Ignore OneNote files by default (Alex Chen)
 17311      -   Add link sharing support (jackyzy823)
 17312  -   S3
 17313      -   Use custom pacer, to retry operations when reasonable (Craig
 17314          Miskell)
 17315      -   Use configured server-side-encryption and storage class options
 17316          when calling CopyObject() (Paul Kohout)
 17317      -   Add --s3-v2-auth flag (Nick Craig-Wood)
 17318      -   Fix v2 auth on files with spaces (Nick Craig-Wood)
 17319  -   Union
 17320      -   Implement union backend which reads from multiple backends
 17321          (Felix Brucker)
 17322      -   Implement optional interfaces (Move, DirMove, Copy etc) (Nick
 17323          Craig-Wood)
 17324      -   Fix ChangeNotify to support multiple remotes (Fabian Möller)
 17325      -   Fix --backup-dir on union backend (Nick Craig-Wood)
 17326  -   WebDAV
 17327      -   Add another time format (Nick Craig-Wood)
 17328      -   Add a small pause after failed upload before deleting file (Nick
 17329          Craig-Wood)
 17330      -   Add workaround for missing mtime (buergi)
 17331      -   Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
 17332  -   Yandex
 17333      -   Remove redundant nil checks (teresy)
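
As a quick illustration of the two new commands in this release, the
invocations below are a sketch only - the remote names, paths and
credentials are placeholders and defaults may differ between versions:

    # serve a remote over FTP (serve ftp)
    rclone serve ftp remote:path --addr :2121 --user demo --pass secret

    # change the storage tier of objects on a remote that supports tiers (settier)
    rclone settier Cool remote:container/path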
 17334  
 17335  
 17336  v1.43.1 - 2018-09-07
 17337  
 17338  Point release to fix hubic and azureblob backends.
 17339  
 17340  -   Bug Fixes
 17341      -   ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
 17342      -   cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
 17343      -   docs: Tidy website display (Anagh Kumar Baranwal)
 17344  -   Azure Blob:
 17345      -   Fix multi-part uploads. (sandeepkru)
 17346  -   Hubic
 17347      -   Fix uploads (Nick Craig-Wood)
 17348      -   Retry auth fetching if it fails to make hubic more reliable
 17349          (Nick Craig-Wood)
 17350  
 17351  
 17352  v1.43 - 2018-09-01
 17353  
 17354  -   New backends
 17355      -   Jottacloud (Sebastian Bünger)
 17356  -   New commands
 17357      -   copyurl: copies a URL to a remote (Denis)
 17358  -   New Features
 17359      -   Reworked config for backends (Nick Craig-Wood)
 17360          -   All backend config can now be supplied by command line, env
 17361              var or config file
 17362          -   Advanced section in the config wizard for the optional items
 17363          -   A large step towards rclone backends being usable in other
 17364              go software
 17365          -   Allow on the fly remotes with :backend: syntax
 17366      -   Stats revamp
 17367          -   Add --progress/-P flag to show interactive progress (Nick
 17368              Craig-Wood)
 17369          -   Show the total progress of the sync in the stats (Nick
 17370              Craig-Wood)
 17371          -   Add --stats-one-line flag for single line stats (Nick
 17372              Craig-Wood)
 17373      -   Added weekday schedule into --bwlimit (Mateusz)
 17374      -   lsjson: Add option to show the original object IDs (Fabian
 17375          Möller)
 17376      -   serve webdav: Make Content-Type without reading the file and add
 17377          --etag-hash (Nick Craig-Wood)
 17378      -   build
 17379          -   Build macOS with native compiler (Nick Craig-Wood)
 17380          -   Update to use go1.11 for the build (Nick Craig-Wood)
 17381      -   rc
 17382          -   Added core/stats to return the stats (reddi1)
 17383      -   version --check: Prints the current release and beta versions
 17384          (Nick Craig-Wood)
 17385  -   Bug Fixes
 17386      -   accounting
 17387          -   Fix time to completion estimates (Nick Craig-Wood)
 17388          -   Fix moving average speed for file stats (Nick Craig-Wood)
 17389      -   config: Fix error reading password from piped input (Nick
 17390          Craig-Wood)
 17391      -   move: Fix --delete-empty-src-dirs flag to delete all empty dirs
 17392          on move (ishuah)
 17393  -   Mount
 17394      -   Implement --daemon-timeout flag for OSXFUSE (Nick Craig-Wood)
 17395      -   Fix mount --daemon not working with encrypted config (Alex Chen)
 17396      -   Clip the number of blocks to 2^32-1 on macOS - fixes borg backup
 17397          (Nick Craig-Wood)
 17398  -   VFS
 17399      -   Enable vfs-read-chunk-size by default (Fabian Möller)
 17400      -   Add the vfs/refresh rc command (Fabian Möller)
 17401      -   Add non recursive mode to vfs/refresh rc command (Fabian Möller)
 17402      -   Try to seek buffer on read only files (Fabian Möller)
 17403  -   Local
 17404      -   Fix crash when deprecated --local-no-unicode-normalization is
 17405          supplied (Nick Craig-Wood)
 17406      -   Fix mkdir error when trying to copy files to the root of a drive
 17407          on windows (Nick Craig-Wood)
 17408  -   Cache
 17409      -   Fix nil pointer deref when using lsjson on cached directory
 17410          (Nick Craig-Wood)
 17411      -   Fix nil pointer deref for occasional crash on playback (Nick
 17412          Craig-Wood)
 17413  -   Crypt
 17414      -   Fix accounting when checking hashes on upload (Nick Craig-Wood)
 17415  -   Amazon Cloud Drive
 17416      -   Make very clear in the docs that rclone has no ACD keys (Nick
 17417          Craig-Wood)
 17418  -   Azure Blob
 17419      -   Add connection string and SAS URL auth (Nick Craig-Wood)
 17420      -   List the container to see if it exists (Nick Craig-Wood)
 17421      -   Port new Azure Blob Storage SDK (sandeepkru)
 17422      -   Added blob tier support for tiering between Hot, Cool and
 17423          Archive. (sandeepkru)
 17424      -   Remove leading / from paths (Nick Craig-Wood)
 17425  -   B2
 17426      -   Support Application Keys (Nick Craig-Wood)
 17427      -   Remove leading / from paths (Nick Craig-Wood)
 17428  -   Box
 17429      -   Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)
 17430      -   Make --box-commit-retries flag defaulting to 100 to fix large
 17431          uploads (Nick Craig-Wood)
 17432  -   Drive
 17433      -   Add --drive-keep-revision-forever flag (lewapm)
 17434      -   Handle gdocs when filtering file names in list (Fabian Möller)
 17435      -   Support using --fast-list for large speedups (Fabian Möller)
 17436  -   FTP
 17437      -   Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)
 17438  -   Google Cloud Storage
 17439      -   Fix index out of range error with --fast-list (Nick Craig-Wood)
 17440  -   Jottacloud
 17441      -   Fix MD5 error check (Oliver Heyme)
 17442      -   Handle empty time values (Martin Polden)
 17443      -   Calculate missing MD5s (Oliver Heyme)
 17444      -   Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)
 17445      -   Add optional MimeTyper interface. (Sebastian Bünger)
 17446      -   Implement optional About interface (for df support). (Sebastian
 17447          Bünger)
 17448  -   Mega
 17449      -   Wait for events instead of arbitrary sleeping (Nick Craig-Wood)
 17450      -   Add --mega-hard-delete flag (Nick Craig-Wood)
 17451      -   Fix failed logins with upper case chars in email (Nick
 17452          Craig-Wood)
 17453  -   Onedrive
 17454      -   Shared folder support (Yoni Jah)
 17455      -   Implement DirMove (Cnly)
 17456      -   Fix rmdir sometimes deleting directories with contents (Nick
 17457          Craig-Wood)
 17458  -   Pcloud
 17459      -   Delete half uploaded files on upload error (Nick Craig-Wood)
 17460  -   Qingstor
 17461      -   Remove leading / from paths (Nick Craig-Wood)
 17462  -   S3
 17463      -   Fix index out of range error with --fast-list (Nick Craig-Wood)
 17464      -   Add --s3-force-path-style (Nick Craig-Wood)
 17465      -   Add support for KMS Key ID (bsteiss)
 17466      -   Remove leading / from paths (Nick Craig-Wood)
 17467  -   Swift
 17468      -   Add storage_policy (Ruben Vandamme)
 17469      -   Make it so just storage_url or auth_token can be overridden
 17470          (Nick Craig-Wood)
 17471      -   Fix server side copy bug for unusual file names (Nick Craig-Wood)
 17472      -   Remove leading / from paths (Nick Craig-Wood)
 17473  -   WebDAV
 17474      -   Ensure we call MKCOL with a URL with a trailing / for QNAP
 17475          interop (Nick Craig-Wood)
 17476      -   If root ends with / then don’t check if it is a file (Nick
 17477          Craig-Wood)
 17478      -   Don’t accept redirects when reading metadata (Nick Craig-Wood)
 17479      -   Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)
 17480      -   Document dCache and Macaroons (Onno Zweers)
 17481      -   Sharepoint recursion with different depth (Henning)
 17482      -   Attempt to remove failed uploads (Nick Craig-Wood)
 17483  -   Yandex
 17484      -   Fix listing/deleting files in the root (Nick Craig-Wood)
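
To illustrate the reworked backend configuration and the new -P flag,
here is a sketch only - the host, user and remote names are
placeholders, and an on the fly :sftp: remote still needs working SSH
credentials (eg an ssh-agent):

    # on the fly remote using the :backend: syntax, options given as flags
    rclone lsd :sftp: --sftp-host example.com --sftp-user demo

    # the same options can come from environment variables instead
    RCLONE_SFTP_HOST=example.com RCLONE_SFTP_USER=demo rclone lsd :sftp:

    # interactive progress display with the new --progress/-P flag
    rclone sync source:path dest:path -P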
 17485  
 17486  
 17487  v1.42 - 2018-06-16
 17488  
 17489  -   New backends
 17490      -   OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)
 17491  -   New commands
 17492      -   deletefile command (Filip Bartodziej)
 17493  -   New Features
 17494      -   copy, move: Copy single files directly, don’t use --files-from
 17495          work-around
 17496          -   this makes them much more efficient
 17497      -   Implement --max-transfer flag to quit transferring at a limit
 17498          -   make exit code 8 for --max-transfer exceeded
 17499      -   copy: copy empty source directories to destination (Ishuah
 17500          Kariuki)
 17501      -   check: Add --one-way flag (Kasper Byrdal Nielsen)
 17502      -   Add siginfo handler for macOS for ctrl-T stats (kubatasiemski)
 17503      -   rc
 17504          -   add core/gc to run a garbage collection on demand
 17505          -   enable go profiling by default on the --rc port
 17506          -   return error from remote on failure
 17507      -   lsf
 17508          -   Add --absolute flag to add a leading / onto path names
 17509          -   Add --csv flag for compliant CSV output
 17510          -   Add ‘m’ format specifier to show the MimeType
 17511          -   Implement ‘i’ format for showing object ID
 17512      -   lsjson
 17513          -   Add MimeType to the output
 17514          -   Add ID field to output to show Object ID
 17515      -   Add --retries-sleep flag (Benjamin Joseph Dag)
 17516      -   Oauth tidy up web page and error handling (Henning Surmeier)
 17517  -   Bug Fixes
 17518      -   Password prompt output with --log-file fixed for unix (Filip
 17519          Bartodziej)
 17520      -   Calculate ModifyWindow each time on the fly to fix various
 17521          problems (Stefan Breunig)
 17522  -   Mount
 17523      -   Only print “File.rename error” if there actually is an error
 17524          (Stefan Breunig)
 17525      -   Delay rename if file has open writers instead of failing
 17526          outright (Stefan Breunig)
 17527      -   Ensure atexit gets run on interrupt
 17528      -   macOS enhancements
 17529          -   Make --noappledouble --noapplexattr
 17530          -   Add --volname flag and remove special chars from it
 17531          -   Make Get/List/Set/Remove xattr return ENOSYS for efficiency
 17532          -   Make --daemon work for macOS without CGO
 17533  -   VFS
 17534      -   Add --vfs-read-chunk-size and --vfs-read-chunk-size-limit
 17535          (Fabian Möller)
 17536      -   Fix ChangeNotify for new or changed folders (Fabian Möller)
 17537  -   Local
 17538      -   Fix symlink/junction point directory handling under Windows
 17539          -   NB you will need to add -L to your command line to copy
 17540              files with reparse points
 17541  -   Cache
 17542      -   Add non cached dirs on notifications (Remus Bunduc)
 17543      -   Allow root to be expired from rc (Remus Bunduc)
 17544      -   Clean remaining empty folders from temp upload path (Remus
 17545          Bunduc)
 17546      -   Cache lists using batch writes (Remus Bunduc)
 17547      -   Use secure websockets for HTTPS Plex addresses (John Clayton)
 17548      -   Reconnect plex websocket on failures (Remus Bunduc)
 17549      -   Fix panic when running without plex configs (Remus Bunduc)
 17550      -   Fix root folder caching (Remus Bunduc)
 17551  -   Crypt
 17552      -   Check the crypted hash of files when uploading for extra data
 17553          security
 17554  -   Dropbox
 17555      -   Make Dropbox for business folders accessible using an initial /
 17556          in the path
 17557  -   Google Cloud Storage
 17558      -   Low level retry all operations if necessary
 17559  -   Google Drive
 17560      -   Add --drive-acknowledge-abuse to download flagged files
 17561      -   Add --drive-alternate-export to fix large doc export
 17562      -   Don’t attempt to choose Team Drives when using rclone config
 17563          create
 17564      -   Fix change list polling with team drives
 17565      -   Fix ChangeNotify for folders (Fabian Möller)
 17566      -   Fix about (and df on a mount) for team drives
 17567  -   Onedrive
 17568      -   Add error handler for onedrive for business requests (Henning
 17569          Surmeier)
 17570  -   S3
 17571      -   Adjust upload concurrency with --s3-upload-concurrency
 17572          (themylogin)
 17573      -   Fix --s3-chunk-size which was always using the minimum
 17574  -   SFTP
 17575      -   Add --ssh-path-override flag (Piotr Oleszczyk)
 17576      -   Fix slow downloads for long latency connections
 17577  -   Webdav
 17578      -   Add workarounds for biz.mail.ru
 17579      -   Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)
 17580      -   Better error message generation
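
A sketch of the new lsf format specifiers and the --max-transfer flag
(remote names, paths and the limit are placeholders):

    # CSV listing of modtime, size, path and MimeType
    rclone lsf --csv --format tspm remote:path

    # stop transferring after roughly 1G - exit code 8 means the limit was hit
    rclone copy source:path dest:path --max-transfer 1G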
 17581  
 17582  
 17583  v1.41 - 2018-04-28
 17584  
 17585  -   New backends
 17586      -   Mega support added
 17587      -   Webdav now supports SharePoint cookie authentication (hensur)
 17588  -   New commands
 17589      -   link: create public link to files and folders (Stefan Breunig)
 17590      -   about: gets quota info from a remote (a-roussos, ncw)
 17591      -   hashsum: a generic tool for any hash to produce md5sum like
 17592          output
 17593  -   New Features
 17594      -   lsd: Add -R flag and fix and update docs for all ls commands
 17595      -   ncdu: added a “refresh” key - CTRL-L (Keith Goldfarb)
 17596      -   serve restic: Add append-only mode (Steve Kriss)
 17597      -   serve restic: Disallow overwriting files in append-only mode
 17598          (Alexander Neumann)
 17599      -   serve restic: Print actual listener address (Matt Holt)
 17600      -   size: Add --json flag (Matthew Holt)
 17601      -   sync: implement --ignore-errors (Mateusz Pabian)
 17602      -   dedupe: Add dedupe largest functionality (Richard Yang)
 17603      -   fs: Extend SizeSuffix to include TB and PB for rclone about
 17604      -   fs: add --dump goroutines and --dump openfiles for debugging
 17605      -   rc: implement core/memstats to print internal memory usage info
 17606      -   rc: new call rc/pid (Michael P. Dubner)
 17607  -   Compile
 17608      -   Drop support for go1.6
 17609  -   Release
 17610      -   Fix make tarball (Chih-Hsuan Yen)
 17611  -   Bug Fixes
 17612      -   filter: fix --min-age and --max-age when used together
 17613      -   fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
 17614      -   lsd,lsf: make sure all times we output are in local time
 17615      -   rc: fix setting bwlimit to unlimited
 17616      -   rc: take note of the --rc-addr flag too as per the docs
 17617  -   Mount
 17618      -   Use About to return the correct disk total/used/free (eg in df)
 17619      -   Set --attr-timeout default to 1s - fixes:
 17620          -   rclone using too much memory
 17621          -   rclone not serving files to samba
 17622          -   excessive time listing directories
 17623      -   Fix df -i (upstream fix)
 17624  -   VFS
 17625      -   Filter files . and .. from directory listing
 17626      -   Only make the VFS cache if --vfs-cache-mode > Off
 17627  -   Local
 17628      -   Add --local-no-check-updated to disable updated file checks
 17629      -   Retry remove on Windows sharing violation error
 17630  -   Cache
 17631      -   Flush the memory cache after close
 17632      -   Purge file data on notification
 17633      -   Always forget parent dir for notifications
 17634      -   Integrate with Plex websocket
 17635      -   Add rc cache/stats (seuffert)
 17636      -   Add info log on notification
 17637  -   Box
 17638      -   Fix failure reading large directories - parse file/directory
 17639          size as float
 17640  -   Dropbox
 17641      -   Fix crypt+obfuscate on dropbox
 17642      -   Fix repeatedly uploading the same files
 17643  -   FTP
 17644      -   Work around strange response from box FTP server
 17645      -   More workarounds for FTP servers to fix mkParentDir error
 17646      -   Fix no error on listing non-existent directory
 17647  -   Google Cloud Storage
 17648      -   Add service_account_credentials (Matt Holt)
 17649      -   Detect bucket presence by listing it - minimises permissions
 17650          needed
 17651      -   Ignore zero length directory markers
 17652  -   Google Drive
 17653      -   Add service_account_credentials (Matt Holt)
 17654      -   Fix directory move leaving a hardlinked directory behind
 17655      -   Return proper google errors when Opening files
 17656      -   When initialized with a filepath, optional features used
 17657          incorrect root path (Stefan Breunig)
 17658  -   HTTP
 17659      -   Fix sync for servers which don’t return Content-Length in HEAD
 17660  -   Onedrive
 17661      -   Add QuickXorHash support for OneDrive for business
 17662      -   Fix socket leak in multipart session upload
 17663  -   S3
 17664      -   Look in S3 named profile files for credentials
 17665      -   Add --s3-disable-checksum to disable checksum uploading (Chris
 17666          Redekop)
 17667      -   Hierarchical configuration support (Giri Badanahatti)
 17668      -   Add in config for all the supported S3 providers
 17669      -   Add One Zone Infrequent Access storage class (Craig Rachel)
 17670      -   Add --use-server-modtime support (Peter Baumgartner)
 17671      -   Add --s3-chunk-size option to control multipart uploads
 17672      -   Ignore zero length directory markers
 17673  -   SFTP
 17674      -   Update docs to match code, fix typos and clarify
 17675          disable_hashcheck prompt (Michael G. Noll)
 17676      -   Update docs with Synology quirks
 17677      -   Fail soft with a debug on hash failure
 17678  -   Swift
 17679      -   Add --use-server-modtime support (Peter Baumgartner)
 17680  -   Webdav
 17681      -   Support SharePoint cookie authentication (hensur)
 17682      -   Strip leading and trailing / off root
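
The new commands from this release can be exercised like this (remote
names and paths are placeholders):

    # quota information for a remote
    rclone about remote:

    # md5sum style output for any supported hash
    rclone hashsum MD5 remote:path

    # create a public link to a file or folder
    rclone link remote:path/to/file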
 17683  
 17684  
 17685  v1.40 - 2018-03-19
 17686  
 17687  -   New backends
 17688      -   Alias backend to create aliases for existing remote names
 17689          (Fabian Möller)
 17690  -   New commands
 17691      -   lsf: list for parsing purposes (Jakub Tasiemski)
 17692          -   by default this is a simple non recursive list of files and
 17693              directories
 17694          -   it can be configured to add more info in an easy to parse
 17695              way
 17696      -   serve restic: for serving a remote as a Restic REST endpoint
 17697          -   This enables restic to use any backends that rclone can
 17698              access
 17699          -   Thanks Alexander Neumann for help, patches and review
 17700      -   rc: enable the remote control of a running rclone
 17701          -   The running rclone must be started with --rc and related
 17702              flags.
 17703          -   Currently there is support for bwlimit, and flushing for
 17704              mount and cache.
 17705  -   New Features
 17706      -   --max-delete flag to add a delete threshold (Bjørn Erik
 17707          Pedersen)
 17708      -   All backends now support RangeOption for ranged Open
 17709          -   cat: Use RangeOption for limited fetches to make more
 17710              efficient
 17711          -   cryptcheck: make reading of nonce more efficient with
 17712              RangeOption
 17713      -   serve http/webdav/restic
 17714          -   support SSL/TLS
 17715          -   add --user --pass and --htpasswd for authentication
 17716      -   copy/move: detect file size change during copy/move and abort
 17717          transfer (ishuah)
 17718      -   cryptdecode: added option to return encrypted file names.
 17719          (ishuah)
 17720      -   lsjson: add --encrypted to show encrypted name (Jakub Tasiemski)
 17721      -   Add --stats-file-name-length to specify the printed file name
 17722          length for stats (Will Gunn)
 17723  -   Compile
 17724      -   Code base was shuffled and factored
 17725          -   backends moved into a backend directory
 17726          -   large packages split up
 17727          -   See the CONTRIBUTING.md doc for info as to what lives where
 17728              now
 17729      -   Update to using go1.10 as the default go version
 17730      -   Implement daily full integration tests
 17731  -   Release
 17732      -   Include a source tarball and sign it and the binaries
 17733      -   Sign the git tags as part of the release process
 17734      -   Add .deb and .rpm packages as part of the build
 17735      -   Make a beta release for all branches on the main repo (but not
 17736          pull requests)
 17737  -   Bug Fixes
 17738      -   config: fixes errors on non existing config by loading config
 17739          file only on first access
 17740      -   config: retry saving the config after failure (Mateusz)
 17741      -   sync: when using --backup-dir don’t delete files if we can’t set
 17742          their modtime
 17743          -   this fixes odd behaviour with Dropbox and --backup-dir
 17744      -   fshttp: fix idle timeouts for HTTP connections
 17745      -   serve http: fix serving files with : in - fixes
 17746      -   Fix --exclude-if-present to ignore directories which it doesn’t
 17747          have permission for (Iakov Davydov)
 17748      -   Make accounting work properly with crypt and b2
 17749      -   remove --no-traverse flag because it is obsolete
 17750  -   Mount
 17751      -   Add --attr-timeout flag to control attribute caching in kernel
 17752          -   this now defaults to 0 which is correct but less efficient
 17753          -   see the mount docs for more info
 17754      -   Add --daemon flag to allow mount to run in the background
 17755          (ishuah)
 17756      -   Fix: Return ENOSYS rather than EIO on attempted link
 17757          -   This fixes FileZilla accessing an rclone mount served over
 17758              sftp.
 17759      -   Fix setting modtime twice
 17760      -   Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
 17761      -   Many bugs fixed in the VFS layer - see below
 17762  -   VFS
 17763      -   Many fixes for --vfs-cache-mode writes and above
 17764          -   Update cached copy if we know it has changed (fixes stale
 17765              data)
 17766          -   Clean path names before using them in the cache
 17767          -   Disable cache cleaner if --vfs-cache-poll-interval=0
 17768          -   Fill and clean the cache immediately on startup
 17769      -   Fix Windows opening every file when it stats the file
 17770      -   Fix applying modtime for an open Write Handle
 17771      -   Fix creation of files when truncating
 17772      -   Write 0 bytes when flushing unwritten handles to avoid race
 17773          conditions in FUSE
 17774      -   Downgrade “poll-interval is not supported” message to Info
 17775      -   Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
 17776  -   Local
 17777      -   Downgrade “invalid cross-device link: trying copy” to debug
 17778      -   Make DirMove return fs.ErrorCantDirMove to allow fallback to
 17779          Copy for cross device
 17780      -   Fix race conditions updating the hashes
 17781  -   Cache
 17782      -   Add support for polling - cache will update when remote changes
 17783          on supported backends
 17784      -   Reduce log level for Plex api
 17785      -   Fix dir cache issue
 17786      -   Implement --cache-db-wait-time flag
 17787      -   Improve efficiency with RangeOption and RangeSeek
 17788      -   Fix dirmove with temp fs enabled
 17789      -   Notify vfs when using temp fs
 17790      -   Offline uploading
 17791      -   Remote control support for path flushing
 17792  -   Amazon cloud drive
 17793      -   Rclone no longer has any working keys - disable integration
 17794          tests
 17795      -   Implement DirChangeNotify to notify cache/vfs/mount of changes
 17796  -   Azureblob
 17797      -   Don’t check for bucket/container presence if listing was OK
 17798          -   this makes rclone do one less request per invocation
 17799      -   Improve accounting for chunked uploads
 17800  -   Backblaze B2
 17801      -   Don’t check for bucket/container presence if listing was OK
 17802          -   this makes rclone do one less request per invocation
 17803  -   Box
 17804      -   Improve accounting for chunked uploads
 17805  -   Dropbox
 17806      -   Fix custom oauth client parameters
 17807  -   Google Cloud Storage
 17808      -   Don’t check for bucket/container presence if listing was OK
 17809          -   this makes rclone do one less request per invocation
 17810  -   Google Drive
 17811      -   Migrate to api v3 (Fabian Möller)
 17812      -   Add scope configuration and root folder selection
 17813      -   Add --drive-impersonate for service accounts
 17814          -   thanks to everyone who tested, explored and contributed docs
 17815      -   Add --drive-use-created-date to use created date as modified
 17816          date (nbuchanan)
 17817      -   Request the export formats only when required
 17818          -   This makes rclone quicker when there are no google docs
 17819      -   Fix finding paths with latin1 chars (a workaround for a drive
 17820          bug)
 17821      -   Fix copying of a single Google doc file
 17822      -   Fix --drive-auth-owner-only to look in all directories
 17823  -   HTTP
 17824      -   Fix handling of directories with & in
 17825  -   Onedrive
 17826      -   Removed upload cutoff and always do session uploads
 17827          -   this stops the creation of multiple versions on business
 17828              onedrive
 17829      -   Overwrite object size value with real size when reading file.
 17830          (Victor)
 17831          -   this fixes oddities when onedrive misreports the size of
 17832              images
 17833  -   Pcloud
 17834      -   Remove unused chunked upload flag and code
 17835  -   Qingstor
 17836      -   Don’t check for bucket/container presence if listing was OK
 17837          -   this makes rclone do one less request per invocation
 17838  -   S3
 17839      -   Support hashes for multipart files (Chris Redekop)
 17840      -   Initial support for IBM COS (S3) (Giri Badanahatti)
 17841      -   Update docs to discourage use of v2 auth with CEPH and others
 17842      -   Don’t check for bucket/container presence if listing was OK
 17843          -   this makes rclone do one less request per invocation
 17844      -   Fix server side copy and set modtime on files with + in
 17845  -   SFTP
 17846      -   Add option to disable remote hash check command execution (Jon
 17847          Fautley)
 17848      -   Add --sftp-ask-password flag to prompt for password when needed
 17849          (Leo R. Lundgren)
 17850      -   Add set_modtime configuration option
 17851      -   Fix following of symlinks
 17852      -   Fix reading config file outside of Fs setup
 17853      -   Fix reading $USER in username fallback not $HOME
 17854      -   Fix running under crontab - Use correct OS way of reading
 17855          username
 17856  -   Swift
 17857      -   Fix refresh of authentication token
 17858          -   in v1.39 a bug was introduced which ignored new tokens -
 17859              this fixes it
 17860      -   Fix extra HEAD transaction when uploading a new file
 17861      -   Don’t check for bucket/container presence if listing was OK
 17862          -   this makes rclone do one less request per invocation
 17863  -   Webdav
 17864      -   Add new time formats to support mydrive.ch and others
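
A minimal sketch of the new remote control interface (the mount point
and remote name are placeholders; the running rclone must have been
started with --rc):

    # start a long running rclone with the remote control enabled
    rclone mount remote:path /mnt/remote --rc &

    # then, from another shell, change the bandwidth limit on the fly
    rclone rc core/bwlimit rate=1M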
 17865  
 17866  
 17867  v1.39 - 2017-12-23
 17868  
 17869  -   New backends
 17870      -   WebDAV
 17871          -   tested with nextcloud, owncloud, put.io and others!
 17872      -   Pcloud
 17873      -   cache - wraps a cache around other backends (Remus Bunduc)
 17874          -   useful in combination with mount
 17875          -   NB this feature is in beta so use with care
 17876  -   New commands
 17877      -   serve command with subcommands:
 17878          -   serve webdav: this implements a webdav server for any rclone
 17879              remote.
 17880          -   serve http: command to serve a remote over HTTP
 17881      -   config: add sub commands for full config file management
 17882          -   create/delete/dump/edit/file/password/providers/show/update
 17883      -   touch: to create or update the timestamp of a file (Jakub
 17884          Tasiemski)
 17885  -   New Features
 17886      -   curl install for rclone (Filip Bartodziej)
 17887      -   --stats now shows percentage, size, rate and ETA in condensed
 17888          form (Ishuah Kariuki)
 17889      -   --exclude-if-present to exclude a directory if a file is
 17890          present (Iakov Davydov)
 17891      -   rmdirs: add --leave-root flag (lewpam)
 17892      -   move: add --delete-empty-src-dirs flag to remove dirs after
 17893          move (Ishuah Kariuki)
 17894      -   Add --dump flag, introduce --dump requests, responses and
 17895          remove --dump-auth, --dump-filters
 17896          -   Obscure X-Auth-Token: from headers when dumping too
 17897      -   Document and implement exit codes for different failure modes
 17898          (Ishuah Kariuki)
 17899  -   Compile
 17900  -   Bug Fixes
 17901      -   Retry lots more different types of errors to make multipart
 17902          transfers more reliable
 17903      -   Save the config before asking for a token, fixes disappearing
 17904          oauth config
 17905      -   Warn the user if --include and --exclude are used together
 17906          (Ernest Borowski)
 17907      -   Fix duplicate files (eg on Google drive) causing spurious copies
 17908      -   Allow trailing and leading whitespace for passwords (Jason Rose)
 17909      -   ncdu: fix crashes on empty directories
 17910      -   rcat: fix goroutine leak
 17911      -   moveto/copyto: Fix to allow copying to the same name
 17912  -   Mount
 17913      -   --vfs-cache-mode to make writes into mounts more reliable.
 17914          -   this requires caching files on the disk (see --cache-dir)
 17915          -   As this is a new feature, use with care
 17916      -   Use sdnotify to signal systemd the mount is ready (Fabian
 17917          Möller)
 17918      -   Check if directory is not empty before mounting (Ernest
 17919          Borowski)
 17920  -   Local
 17921      -   Add error message for cross file system moves
 17922      -   Fix equality check for times
 17923  -   Dropbox
 17924      -   Rework multipart upload
 17925          -   buffer the chunks when uploading large files so they can be
 17926              retried
 17927          -   change default chunk size to 48MB now we are buffering them
 17928              in memory
 17929          -   retry every error after the first chunk is done successfully
 17930      -   Fix error when renaming directories
 17931  -   Swift
 17932      -   Fix crash on bad authentication
 17933  -   Google Drive
 17934      -   Add service account support (Tim Cooijmans)
 17935  -   S3
 17936      -   Make it work properly with Digital Ocean Spaces (Andrew
 17937          Starr-Bochicchio)
 17938      -   Fix crash if a bad listing is received
 17939      -   Add support for ECS task IAM roles (David Minor)
 17940  -   Backblaze B2
 17941      -   Fix multipart upload retries
 17942      -   Fix --hard-delete to make it work 100% of the time
 17943  -   Swift
 17944      -   Allow authentication with storage URL and auth key (Giovanni
 17945          Pizzi)
 17946      -   Add new fields for swift configuration to support IBM Bluemix
 17947          Swift (Pierre Carlson)
 17948      -   Add OS_TENANT_ID and OS_USER_ID to config
 17949      -   Allow configs with user id instead of user name
 17950      -   Check if swift segments container exists before creating (John
 17951          Leach)
 17952      -   Fix memory leak in swift transfers (upstream fix)
 17953  -   SFTP
 17954      -   Add option to enable the use of aes128-cbc cipher (Jon Fautley)
 17955  -   Amazon cloud drive
 17956      -   Fix download of large files failing with “Only one auth
 17957          mechanism allowed”
 17958  -   crypt
 17959      -   Option to encrypt directory names or leave them intact
 17960      -   Implement DirChangeNotify (Fabian Möller)
 17961  -   onedrive
 17962      -   Add option to choose resourceURL during setup of OneDrive
 17963          Business account if more than one is available for user
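
The new serve subcommands can be tried like this (remote names are
placeholders and the default listen addresses may differ):

    # serve a remote over WebDAV
    rclone serve webdav remote:path

    # serve a remote read-only over HTTP on port 8000
    rclone serve http remote:path --addr :8000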
 17964  
 17965  
 17966  v1.38 - 2017-09-30
 17967  
 17968  -   New backends
 17969      -   Azure Blob Storage (thanks Andrei Dragomir)
 17970      -   Box
 17971      -   Onedrive for Business (thanks Oliver Heyme)
 17972      -   QingStor from QingCloud (thanks wuyu)
 17973  -   New commands
 17974      -   rcat - read from standard input and stream upload
 17975      -   tree - shows a nicely formatted recursive listing
 17976      -   cryptdecode - decode crypted file names (thanks ishuah)
 17977      -   config show - print the config file
 17978      -   config file - print the config file location
 17979  -   New Features
 17980      -   Empty directories are deleted on sync
 17981      -   dedupe - implement merging of duplicate directories
 17982      -   check and cryptcheck made more consistent and use less memory
 17983      -   cleanup for remaining remotes (thanks ishuah)
 17984      -   --immutable for ensuring that files don’t change (thanks Jacob
 17985          McNamee)
 17986      -   --user-agent option (thanks Alex McGrath Kraak)
 17987      -   --disable flag to disable optional features
 17988      -   --bind flag for choosing the local addr on outgoing connections
 17989      -   Support for zsh auto-completion (thanks bpicode)
 17990      -   Stop normalizing file names but do a normalized compare in sync
 17991  -   Compile
 17992      -   Update to using go1.9 as the default go version
 17993      -   Remove snapd build due to maintenance problems
 17994  -   Bug Fixes
 17995      -   Improve retriable error detection which makes multipart uploads
 17996          better
 17997      -   Make check obey --ignore-size
 17998      -   Fix bwlimit toggle in conjunction with schedules (thanks
 17999          cbruegg)
 18000      -   config ensures newly written config is on the same mount
 18001  -   Local
 18002      -   Revert to copy when moving file across file system boundaries
 18003      -   --skip-links to suppress symlink warnings (thanks Zhiming Wang)
 18004  -   Mount
 18005      -   Re-use rcat internals to support uploads from all remotes
 18006  -   Dropbox
 18007      -   Fix “entry doesn’t belong in directory” error
 18008      -   Stop using deprecated API methods
 18009  -   Swift
 18010      -   Fix server side copy to empty container with --fast-list
 18011  -   Google Drive
 18012      -   Change the default for --drive-use-trash to true
 18013  -   S3
 18014      -   Set session token when using STS (thanks Girish Ramakrishnan)
 18015      -   Glacier docs and error messages (thanks Jan Varho)
 18016      -   Read 1000 (not 1024) items in dir listings to fix Wasabi
 18017  -   Backblaze B2
 18018      -   Fix SHA1 mismatch when downloading files with no SHA1
 18019      -   Calculate missing hashes on the fly instead of spooling
 18020      -   --b2-hard-delete to permanently delete (not hide) files (thanks
 18021          John Papandriopoulos)
 18022  -   Hubic
 18023      -   Fix creating containers - no longer have to use the default
 18024          container
 18025  -   Swift
 18026      -   Optionally configure from a standard set of OpenStack
 18027          environment vars
 18028      -   Add endpoint_type config
 18029  -   Google Cloud Storage
 18030      -   Fix bucket creation to work with limited permission users
 18031  -   SFTP
 18032      -   Implement connection pooling for multiple ssh connections
 18033      -   Limit new connections per second
 18034      -   Add support for MD5 and SHA1 hashes where available (thanks
 18035          Christian Brüggemann)
 18036  -   HTTP
 18037      -   Fix URL encoding issues
 18038      -   Fix directories with : in
 18039      -   Fix panic with URL encoded content
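
Two of the new commands in sketch form (remote names and paths are
placeholders):

    # stream standard input straight to a remote object with rcat
    tar czf - /home/demo | rclone rcat remote:backup/home.tgz

    # nicely formatted recursive listing with tree
    rclone tree remote:path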
 18040  
 18041  
 18042  v1.37 - 2017-07-22
 18043  
 18044  -   New backends
 18045      -   FTP - thanks to Antonio Messina
 18046      -   HTTP - thanks to Vasiliy Tolstov
 18047  -   New commands
 18048      -   rclone ncdu - for exploring a remote with a text based user
 18049          interface.
 18050      -   rclone lsjson - for listing with a machine readable output
 18051      -   rclone dbhashsum - to show Dropbox style hashes of files (local
 18052          or Dropbox)
 18053  -   New Features
 18054      -   Implement --fast-list flag
 18055          -   This allows remotes to list recursively if they can
 18056          -   This uses fewer transactions (important if you pay for them)
 18057          -   This may or may not be quicker
 18058          -   This will use more memory as it has to hold the listing in
 18059              memory
 18060          -   --old-sync-method deprecated - the remaining uses are
 18061              covered by --fast-list
 18062          -   This involved a major re-write of all the listing code
 18063      -   Add --tpslimit and --tpslimit-burst to limit transactions per
 18064          second
 18065          -   this is useful in conjunction with rclone mount to limit
 18066              external apps
 18067      -   Add --stats-log-level so you can see --stats without -v
 18068      -   Print password prompts to stderr - Hraban Luyat
 18069      -   Warn about duplicate files when syncing
 18070      -   Oauth improvements
 18071          -   allow auth_url and token_url to be set in the config file
 18072          -   Print redirection URI if using own credentials.
 18073      -   Don’t Mkdir at the start of sync to save transactions
 18074  -   Compile
 18075      -   Update build to go1.8.3
 18076      -   Require go1.6 for building rclone
 18077      -   Compile 386 builds with “GO386=387” for maximum compatibility
 18078  -   Bug Fixes
 18079      -   Fix menu selection when no remotes
 18080      -   Config saving reworked to not kill the file if disk gets full
 18081      -   Don’t delete remote if name does not change while renaming
 18082      -   moveto, copyto: report transfers and checks as per move and copy
 18083  -   Local
 18084      -   Add --local-no-unicode-normalization flag - Bob Potter
 18085  -   Mount
 18086      -   Now supported on Windows using cgofuse and WinFsp - thanks to
 18087          Bill Zissimopoulos for much help
 18088      -   Compare checksums on upload/download via FUSE
 18089      -   Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM -
 18090          Jérôme Vizcaino
 18091      -   On read only open of file, make open pending until first read
 18092      -   Make --read-only reject modify operations
 18093      -   Implement ModTime via FUSE for remotes that support it
 18094      -   Allow modTime to be changed even before all writers are closed
 18095      -   Fix panic on renames
 18096      -   Fix hang on errored upload
 18097  -   Crypt
 18098      -   Report the name:root as specified by the user
 18099      -   Add an “obfuscate” option for filename encryption - Stephen
 18100          Harris
 18101  -   Amazon Drive
 18102      -   Fix initialization order for token renewer
 18103      -   Remove revoked credentials, allow oauth proxy config and update
 18104          docs
 18105  -   B2
 18106      -   Reduce minimum chunk size to 5MB
 18107  -   Drive
 18108      -   Add team drive support
 18109      -   Reduce bandwidth by adding fields for partial responses - Martin
 18110          Kristensen
 18111      -   Implement --drive-shared-with-me flag to view shared with me
 18112          files - Danny Tsai
 18113      -   Add --drive-trashed-only to read only the files in the trash
 18114      -   Remove obsolete --drive-full-list
 18115      -   Add missing seek to start on retries of chunked uploads
 18116      -   Fix stats accounting for upload
 18117      -   Convert / in names to a unicode equivalent (／)
 18118      -   Poll for Google Drive changes when mounted
 18119  -   OneDrive
 18120      -   Fix the uploading of files with spaces
 18121      -   Fix initialization order for token renewer
 18122      -   Display speeds accurately when uploading - Yoni Jah
 18123      -   Swap to using http://localhost:53682/ as redirect URL - Michael
 18124          Ledin
 18125      -   Retry on token expired error, reset upload body on retry - Yoni
 18126          Jah
 18127  -   Google Cloud Storage
 18128      -   Add ability to specify location and storage class via config and
 18129          command line - thanks gdm85
 18130      -   Create container if necessary on server side copy
 18131      -   Increase directory listing chunk to 1000 to increase performance
 18132      -   Obtain a refresh token for GCS - Steven Lu
 18133  -   Yandex
 18134      -   Fix the name reported in log messages (was empty)
 18135      -   Correct error return for listing empty directory
 18136  -   Dropbox
 18137      -   Rewritten to use the v2 API
 18138          -   Now supports ModTime
 18139              -   Can only set by uploading the file again
 18140              -   If you uploaded with an old rclone, rclone may upload
 18141                  everything again
 18142              -   Use --size-only or --checksum to avoid this
 18143          -   Now supports the Dropbox content hashing scheme
 18144          -   Now supports low level retries
 18145  -   S3
 18146      -   Work around eventual consistency in bucket creation
 18147      -   Create container if necessary on server side copy
 18148      -   Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar
 18149          Ahmed
 18150  -   Swift, Hubic
 18151      -   Fix zero length directory markers showing in the subdirectory
 18152          listing
 18153          -   this caused lots of duplicate transfers
 18154      -   Fix paged directory listings
 18155          -   this caused duplicate directory errors
 18156      -   Create container if necessary on server side copy
 18157      -   Increase directory listing chunk to 1000 to increase performance
 18158      -   Make sensible error if the user forgets the container
 18159  -   SFTP
 18160      -   Add support for using ssh key files
 18161      -   Fix under Windows
 18162      -   Fix ssh agent on Windows
 18163      -   Adapt to latest version of library - Igor Kharin
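
A sketch of the new listing and rate limiting flags (remote names and
the chosen limits are placeholders):

    # use --fast-list to cut down transactions on remotes that support it
    rclone sync source:path dest:path --fast-list

    # cap rclone at 10 transactions per second, eg alongside a mount
    rclone copy source:path dest:path --tpslimit 10

    # explore a remote with the text based browser
    rclone ncdu remote:path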
 18164  
 18165  
 18166  v1.36 - 2017-03-18
 18167  
 18168  -   New Features
 18169      -   SFTP remote (Jack Schmidt)
 18170      -   Re-implement sync routine to work a directory at a time reducing
 18171          memory usage
 18172      -   Logging revamped to be more in line with rsync - now much
 18173          quieter: -v only shows transfers, -vv is for full debug, and
 18174          --syslog logs to syslog on capable platforms
 18175      -   Implement --backup-dir and --suffix
 18176      -   Implement --track-renames (initial implementation by Bjørn Erik
 18177          Pedersen)
 18178      -   Add time-based bandwidth limits (Lukas Loesche)
 18179      -   rclone cryptcheck: checks integrity of crypt remotes
 18180      -   Allow all config file variables and options to be set from
 18181          environment variables
 18182      -   Add --buffer-size parameter to control buffer size for copy
 18183      -   Make --delete-after the default
 18184      -   Add --ignore-checksum flag (fixed by Hisham Zarka)
 18185      -   rclone check: Add --download flag to check all the data, not
 18186          just hashes
 18187      -   rclone cat: add --head, --tail, --offset, --count and --discard
 18188      -   rclone config: when choosing from a list, allow the value to be
 18189          entered too
 18190      -   rclone config: allow rename and copy of remotes
 18191      -   rclone obscure: for generating encrypted passwords for rclone’s
 18192          config (T.C. Ferguson)
 18193      -   Comply with XDG Base Directory specification (Dario Giovannetti)
 18194          -   this moves the default location of the config file in a
 18195              backwards compatible way
 18196      -   Release changes
 18197          -   Ubuntu snap support (Dedsec1)
 18198          -   Compile with go 1.8
 18199          -   MIPS/Linux big and little endian support
 18200  -   Bug Fixes
 18201      -   Fix copyto copying things to the wrong place if the destination
 18202          dir didn’t exist
 18203      -   Fix parsing of remotes in moveto and copyto
 18204      -   Fix --delete-before deleting files on copy
 18205      -   Fix --files-from with an empty file copying everything
 18206      -   Fix sync: don’t update mod times if --dry-run set
 18207      -   Fix MimeType propagation
 18208      -   Fix filters to add ** rules to directory rules
 18209  -   Local
 18210      -   Implement -L, --copy-links flag to allow rclone to follow
 18211          symlinks
 18212      -   Open files in write only mode so rclone can write to an rclone
 18213          mount
 18214      -   Fix unnormalised unicode causing problems reading directories
 18215      -   Fix interaction between -x flag and --max-depth
 18216  -   Mount
 18217      -   Implement proper directory handling (mkdir, rmdir, renaming)
 18218      -   Make include and exclude filters apply to mount
 18219      -   Implement read and write async buffers - control with
 18220          --buffer-size
 18221      -   Fix fsync on directories
 18222      -   Fix retry on network failure when reading off crypt
 18223  -   Crypt
 18224      -   Add --crypt-show-mapping to show encrypted file mapping
 18225      -   Fix crypt writer getting stuck in a loop
 18226          -   IMPORTANT this bug had the potential to cause data
 18227              corruption when
 18228              -   reading data from a network based remote and
 18229              -   writing to a crypt on Google Drive
 18230          -   Use the cryptcheck command to validate your data if you are
 18231              concerned
 18232          -   If syncing two crypt remotes, sync the unencrypted remote
 18233  -   Amazon Drive
 18234      -   Fix panics on Move (rename)
 18235      -   Fix panic on token expiry
 18236  -   B2
 18237      -   Fix inconsistent listings and rclone check
 18238      -   Fix uploading empty files with go1.8
 18239      -   Constrain memory usage when doing multipart uploads
 18240      -   Fix upload url not being refreshed properly
 18241  -   Drive
 18242      -   Fix Rmdir on directories with trashed files
 18243      -   Fix “Ignoring unknown object” when downloading
 18244      -   Add --drive-list-chunk
 18245      -   Add --drive-skip-gdocs (Károly Oláh)
 18246  -   OneDrive
 18247      -   Implement Move
 18248      -   Fix Copy
 18249          -   Fix overwrite detection in Copy
 18250          -   Fix waitForJob to parse errors correctly
 18251      -   Use token renewer to stop auth errors on long uploads
 18252      -   Fix uploading empty files with go1.8
 18253  -   Google Cloud Storage
 18254      -   Fix depth 1 directory listings
 18255  -   Yandex
 18256      -   Fix single level directory listing
 18257  -   Dropbox
 18258      -   Normalise the case for single level directory listings
 18259      -   Fix depth 1 listing
 18260  -   S3
 18261      -   Added ca-central-1 region (Jon Yergatian)
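
The new --backup-dir and cryptcheck features might be used like this
(remote names, paths and the backup directory name are placeholders;
--backup-dir must be on the same remote as the destination):

    # move anything deleted or overwritten by the sync into a backup directory
    rclone sync source:path dest:path --backup-dir dest:backup-2017-03-18

    # check the integrity of a crypt remote against its plaintext source
    rclone cryptcheck /local/path secret:path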
 18262  
 18263  
 18264  v1.35 - 2017-01-02
 18265  
 18266  -   New Features
 18267      -   moveto and copyto commands for choosing a destination name on
 18268          copy/move
 18269      -   rmdirs command to recursively delete empty directories
 18270      -   Allow repeated --include/--exclude/--filter options
 18271      -   Only show transfer stats on commands which transfer stuff
 18272          -   show stats on any command using the --stats flag
 18273      -   Allow overlapping directories in move when server side dir move
 18274          is supported
 18275      -   Add --stats-unit option - thanks Scott McGillivray
 18276  -   Bug Fixes
 18277      -   Fix the config file being overwritten when two rclones are
 18278          running
 18279      -   Make rclone lsd obey the filters properly
 18280      -   Fix compilation on mips
 18281      -   Fix not transferring files that don’t differ in size
 18282      -   Fix panic on nil retry/fatal error
 18283  -   Mount
 18284      -   Retry reads on error - should help with reliability a lot
 18285      -   Report the modification times for directories from the remote
 18286      -   Add bandwidth accounting and limiting (fixes --bwlimit)
 18287      -   If --stats provided will show stats and which files are
 18288          transferring
 18289      -   Support R/W files if truncate is set.
 18290      -   Implement statfs interface so df works
 18291      -   Note that write is now supported on Amazon Drive
 18292      -   Report number of blocks in a file - thanks Stefan Breunig
 18293  -   Crypt
 18294      -   Prevent the user pointing crypt at itself
 18295      -   Fix failed to authenticate decrypted block errors
 18296          -   these will now return the underlying unexpected EOF instead
 18297  -   Amazon Drive
 18298      -   Add support for server side move and directory move - thanks
 18299          Stefan Breunig
 18300      -   Fix nil pointer deref on size attribute
 18301  -   B2
 18302      -   Use new prefix and delimiter parameters in directory listings
 18303          -   This makes --max-depth 1 dir listings as used in mount much
 18304              faster
 18305      -   Reauth the account while doing uploads too - should help with
 18306          token expiry
 18307  -   Drive
 18308      -   Make DirMove more efficient and complain about moving the root
 18309      -   Create destination directory on Move()
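
The new copyto and rmdirs commands in sketch form (remote names and
paths are placeholders):

    # copy a single file to a different name at the destination
    rclone copyto source:path/file.txt dest:path/renamed.txt

    # recursively remove empty directories left behind
    rclone rmdirs dest:path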
 18310  
 18311  
 18312  v1.34 - 2016-11-06
 18313  
 18314  -   New Features
 18315      -   Stop single file and --files-from operations iterating through
 18316          the source bucket.
 18317      -   Stop removing failed upload to cloud storage remotes
 18318      -   Make ContentType be preserved for cloud to cloud copies
 18319      -   Add support to toggle bandwidth limits via SIGUSR2 - thanks
 18320          Marco Paganini
 18321      -   rclone check shows count of hashes that couldn’t be checked
 18322      -   rclone listremotes command
 18323      -   Support linux/arm64 build - thanks Fredrik Fornwall
 18324      -   Remove Authorization: lines from --dump-headers output
 18325  -   Bug Fixes
 18326      -   Ignore files with control characters in the names
 18327      -   Fix rclone move command
 18328          -   Delete src files which already existed in dst
 18329          -   Fix deletion of src file when dst file older
 18330      -   Fix rclone check on crypted file systems
 18331      -   Make failed uploads not count as “Transferred”
 18332      -   Make sure high level retries show with -q
 18333      -   Use a vendor directory with godep for repeatable builds
 18334  -   rclone mount - FUSE
 18335      -   Implement FUSE mount options
 18336          -   --no-modtime, --debug-fuse, --read-only, --allow-non-empty,
 18337              --allow-root, --allow-other
 18338          -   --default-permissions, --write-back-cache, --max-read-ahead,
 18339              --umask, --uid, --gid
 18340      -   Add --dir-cache-time to control caching of directory entries
 18341      -   Implement seek for files opened for read (useful for video
 18342          players)
 18343          -   with --no-seek flag to disable
 18344      -   Fix crash on 32 bit ARM (alignment of 64 bit counter)
 18345      -   …and many more internal fixes and improvements!
 18346  -   Crypt
 18347      -   Don’t show encrypted password in configurator to stop confusion
 18348  -   Amazon Drive
 18349      -   New wait for upload option --acd-upload-wait-per-gb
 18350          -   upload timeouts scale by file size and can be disabled
 18351      -   Add 502 Bad Gateway to list of errors we retry
 18352      -   Fix overwriting a file with a zero length file
 18353      -   Fix ACD file size warning limit - thanks Felix Bünemann
 18354  -   Local
 18355      -   Unix: implement -x/--one-file-system to stay on a single file
 18356          system
 18357          -   thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
 18358      -   Windows: ignore the symlink bit on files
 18359      -   Windows: Ignore directory based junction points
 18360  -   B2
 18361      -   Make sure each upload has at least one upload slot - fixes
 18362          strange upload stats
 18363      -   Fix uploads when using crypt
 18364      -   Fix download of large files (sha1 mismatch)
 18365      -   Return error when we try to create a bucket which someone else
 18366          owns
 18367      -   Update B2 docs with Data usage, and Crypt section - thanks
 18368          Tomasz Mazur
 18369  -   S3
 18370      -   Command line and config file support for
 18371          -   Setting/overriding ACL - thanks Radek Senfeld
 18372          -   Setting storage class - thanks Asko Tamm
 18373  -   Drive
 18374      -   Make exponential backoff work exactly as per Google
 18375          specification
 18376      -   add .epub, .odp and .tsv as export formats.
 18377  -   Swift
 18378      -   Don’t read metadata for directory marker objects
 18379  
 18380  
 18381  v1.33 - 2016-08-24
 18382  
 18383  -   New Features
 18384      -   Implement encryption
 18385          -   data encrypted in NACL secretbox format
 18386          -   with optional file name encryption
 18387      -   New commands
 18388          -   rclone mount - implements FUSE mounting of remotes
 18389              (EXPERIMENTAL)
 18390              -   works on Linux, FreeBSD and OS X (need testers for the
 18391                  last 2!)
 18392          -   rclone cat - outputs remote file or files to the terminal
 18393          -   rclone genautocomplete - command to make a bash completion
 18394              script for rclone
 18395      -   Editing a remote using rclone config now goes through the wizard
 18396      -   Compile with go 1.7 - this fixes rclone on macOS Sierra and on
 18397          386 processors
 18398      -   Use cobra for sub commands and docs generation
 18399  -   drive
 18400      -   Document how to make your own client_id
 18401  -   s3
 18402      -   User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
 18403  -   b2
 18404      -   Fix stats accounting for upload - no more jumping to 100% done
 18405      -   On cleanup delete hide marker if it is the current file
 18406      -   New B2 API endpoint (thanks Per Cederberg)
18407      -   Set maximum backoff to 5 minutes
 18408  -   onedrive
 18409      -   Fix URL escaping in file names - eg uploading files with + in
 18410          them.
 18411  -   amazon cloud drive
 18412      -   Fix token expiry during large uploads
 18413      -   Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
 18414  -   local
 18415      -   Fix filenames with invalid UTF-8 not being uploaded
 18416      -   Fix problem with some UTF-8 characters on OS X
 18417  
 18418  
 18419  v1.32 - 2016-07-13
 18420  
 18421  -   Backblaze B2
18422      -   Fix upload of large files not in root
 18423  
 18424  
 18425  v1.31 - 2016-07-13
 18426  
 18427  -   New Features
 18428      -   Reduce memory on sync by about 50%
18429      -   Implement --no-traverse flag to stop copy traversing the
18430          destination remote.
 18431          -   This can be used to reduce memory usage down to the smallest
 18432              possible.
 18433          -   Useful to copy a small number of files into a large
 18434              destination folder.
 18435      -   Implement cleanup command for emptying trash / removing old
 18436          versions of files
 18437          -   Currently B2 only
 18438      -   Single file handling improved
18439          -   Now copied with --files-from
18440          -   Automatically sets --no-traverse when copying a single file
18441      -   Info on installing with ansible - thanks Stefan Weichinger
18442      -   Implement --no-update-modtime flag to stop rclone fixing the
18443          remote modified times.
 18444  -   Bug Fixes
 18445      -   Fix move command - stop it running for overlapping Fses - this
 18446          was causing data loss.
 18447  -   Local
 18448      -   Fix incomplete hashes - this was causing problems for B2.
 18449  -   Amazon Drive
 18450      -   Rename Amazon Cloud Drive to Amazon Drive - no changes to config
 18451          file needed.
 18452  -   Swift
 18453      -   Add support for non-default project domain - thanks Antonio
 18454          Messina.
 18455  -   S3
 18456      -   Add instructions on how to use rclone with minio.
 18457      -   Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
 18458      -   Skip setting the modified time for objects > 5GB as it isn’t
 18459          possible.
 18460  -   Backblaze B2
18461      -   Add --b2-versions flag so old versions can be listed and
18462          retrieved.
 18463      -   Treat 403 errors (eg cap exceeded) as fatal.
 18464      -   Implement cleanup command for deleting old file versions.
 18465      -   Make error handling compliant with B2 integrations notes.
 18466      -   Fix handling of token expiry.
18467      -   Implement --b2-test-mode to set X-Bz-Test-Mode header.
 18468      -   Set cutoff for chunked upload to 200MB as per B2 guidelines.
 18469      -   Make upload multi-threaded.
 18470  -   Dropbox
 18471      -   Don’t retry 461 errors.
 18472  
 18473  
 18474  v1.30 - 2016-06-18
 18475  
 18476  -   New Features
 18477      -   Directory listing code reworked for more features and better
 18478          error reporting (thanks to Klaus Post for help). This enables
 18479          -   Directory include filtering for efficiency
18480          -   --max-depth parameter
 18481          -   Better error reporting
 18482          -   More to come
 18483      -   Retry more errors
18484      -   Add --ignore-size flag - for uploading images to onedrive
 18485      -   Log -v output to stdout by default
 18486      -   Display the transfer stats in more human readable form
 18487      -   Make 0 size files specifiable with --max-size 0b
18488      -   Add b suffix so we can specify bytes in --bwlimit, --min-size etc
 18489      -   Use “password:” instead of “password>” prompt - thanks Klaus
 18490          Post and Leigh Klotz
 18491  -   Bug Fixes
 18492      -   Fix retry doing one too many retries
 18493  -   Local
 18494      -   Fix problems with OS X and UTF-8 characters
 18495  -   Amazon Drive
 18496      -   Check a file exists before uploading to help with 408 Conflict
 18497          errors
 18498      -   Reauth on 401 errors - this has been causing a lot of problems
 18499      -   Work around spurious 403 errors
 18500      -   Restart directory listings on error
 18501  -   Google Drive
 18502      -   Check a file exists before uploading to help with duplicates
 18503      -   Fix retry of multipart uploads
 18504  -   Backblaze B2
 18505      -   Implement large file uploading
 18506  -   S3
18507      -   Add AES256 server-side encryption - thanks Justin R. Wilson
 18508  -   Google Cloud Storage
 18509      -   Make sure we don’t use conflicting content types on upload
 18510      -   Add service account support - thanks Michal Witkowski
 18511  -   Swift
 18512      -   Add auth version parameter
 18513      -   Add domain option for openstack (v3 auth) - thanks Fabian Ruff
 18514  
 18515  
 18516  v1.29 - 2016-04-18
 18517  
 18518  -   New Features
 18519      -   Implement -I, --ignore-times for unconditional upload
18520      -   Improve dedupe command
 18521          -   Now removes identical copies without asking
 18522          -   Now obeys --dry-run
 18523          -   Implement --dedupe-mode for non interactive running
 18524              -   --dedupe-mode interactive - interactive the default.
 18525              -   --dedupe-mode skip - removes identical files then skips
 18526                  anything left.
 18527              -   --dedupe-mode first - removes identical files then keeps
 18528                  the first one.
 18529              -   --dedupe-mode newest - removes identical files then
 18530                  keeps the newest one.
 18531              -   --dedupe-mode oldest - removes identical files then
 18532                  keeps the oldest one.
 18533              -   --dedupe-mode rename - removes identical files then
 18534                  renames the rest to be different.
 18535  -   Bug fixes
 18536      -   Make rclone check obey the --size-only flag.
 18537      -   Use “application/octet-stream” if discovered mime type is
 18538          invalid.
 18539      -   Fix missing “quit” option when there are no remotes.
 18540  -   Google Drive
 18541      -   Increase default chunk size to 8 MB - increases upload speed of
 18542          big files
 18543      -   Speed up directory listings and make more reliable
 18544      -   Add missing retries for Move and DirMove - increases reliability
 18545      -   Preserve mime type on file update
 18546  -   Backblaze B2
 18547      -   Enable mod time syncing
 18548          -   This means that B2 will now check modification times
 18549          -   It will upload new files to update the modification times
 18550          -   (there isn’t an API to just set the mod time.)
 18551          -   If you want the old behaviour use --size-only.
 18552      -   Update API to new version
 18553      -   Fix parsing of mod time when not in metadata
 18554  -   Swift/Hubic
 18555      -   Don’t return an MD5SUM for static large objects
 18556  -   S3
 18557      -   Fix uploading files bigger than 50GB
 18558  
 18559  
 18560  v1.28 - 2016-03-01
 18561  
 18562  -   New Features
 18563      -   Configuration file encryption - thanks Klaus Post
 18564      -   Improve rclone config adding more help and making it easier to
 18565          understand
 18566      -   Implement -u/--update so creation times can be used on all
 18567          remotes
 18568      -   Implement --low-level-retries flag
 18569      -   Optionally disable gzip compression on downloads with
 18570          --no-gzip-encoding
 18571  -   Bug fixes
 18572      -   Don’t make directories if --dry-run set
 18573      -   Fix and document the move command
 18574      -   Fix redirecting stderr on unix-like OSes when using --log-file
 18575      -   Fix delete command to wait until all finished - fixes missing
 18576          deletes.
 18577  -   Backblaze B2
18578      -   Use one upload URL per goroutine - fixes “more than one upload
18579          using auth token” errors
 18580      -   Add pacing, retries and reauthentication - fixes token expiry
 18581          problems
 18582      -   Upload without using a temporary file from local (and remotes
 18583          which support SHA1)
 18584      -   Fix reading metadata for all files when it shouldn’t have been
 18585  -   Drive
 18586      -   Fix listing drive documents at root
 18587      -   Disable copy and move for Google docs
 18588  -   Swift
 18589      -   Fix uploading of chunked files with non ASCII characters
 18590      -   Allow setting of storage_url in the config - thanks Xavier Lucas
 18591  -   S3
 18592      -   Allow IAM role and credentials from environment variables -
 18593          thanks Brian Stengaard
 18594      -   Allow low privilege users to use S3 (check if directory exists
 18595          during Mkdir) - thanks Jakub Gedeon
 18596  -   Amazon Drive
 18597      -   Retry on more things to make directory listings more reliable
 18598  
 18599  
 18600  v1.27 - 2016-01-31
 18601  
 18602  -   New Features
 18603      -   Easier headless configuration with rclone authorize
 18604      -   Add support for multiple hash types - we now check SHA1 as well
 18605          as MD5 hashes.
 18606      -   delete command which does obey the filters (unlike purge)
 18607      -   dedupe command to deduplicate a remote. Useful with Google
 18608          Drive.
 18609      -   Add --ignore-existing flag to skip all files that exist on
 18610          destination.
 18611      -   Add --delete-before, --delete-during, --delete-after flags.
 18612      -   Add --memprofile flag to debug memory use.
 18613      -   Warn the user about files with same name but different case
18614      -   Make --include rules add their implicit exclude * at the end of
 18615          the filter list
 18616      -   Deprecate compiling with go1.3
 18617  -   Amazon Drive
 18618      -   Fix download of files > 10 GB
 18619      -   Fix directory traversal (“Next token is expired”) for large
 18620          directory listings
 18621      -   Remove 409 conflict from error codes we will retry - stops very
 18622          long pauses
 18623  -   Backblaze B2
 18624      -   SHA1 hashes now checked by rclone core
 18625  -   Drive
 18626      -   Add --drive-auth-owner-only to only consider files owned by the
 18627          user - thanks Björn Harrtell
 18628      -   Export Google documents
 18629  -   Dropbox
 18630      -   Make file exclusion error controllable with -q
 18631  -   Swift
 18632      -   Fix upload from unprivileged user.
 18633  -   S3
 18634      -   Fix updating of mod times of files with + in.
 18635  -   Local
 18636      -   Add local file system option to disable UNC on Windows.
 18637  
 18638  
 18639  v1.26 - 2016-01-02
 18640  
 18641  -   New Features
 18642      -   Yandex storage backend - thank you Dmitry Burdeev (“dibu”)
 18643      -   Implement Backblaze B2 storage backend
18644      -   Add --min-age and --max-age flags - thank you Adriano Aurélio
 18645          Meirelles
 18646      -   Make ls/lsl/md5sum/size/check obey includes and excludes
 18647  -   Fixes
 18648      -   Fix crash in http logging
 18649      -   Upload releases to github too
 18650  -   Swift
 18651      -   Fix sync for chunked files
 18652  -   OneDrive
 18653      -   Re-enable server side copy
 18654      -   Don’t mask HTTP error codes with JSON decode error
 18655  -   S3
 18656      -   Fix corrupting Content-Type on mod time update (thanks Joseph
 18657          Spurrier)
 18658  
 18659  
 18660  v1.25 - 2015-11-14
 18661  
 18662  -   New features
 18663      -   Implement Hubic storage system
 18664  -   Fixes
18665      -   Fix deletion of some excluded files without --delete-excluded
 18666          -   This could have deleted files unexpectedly on sync
 18667          -   Always check first with --dry-run!
 18668  -   Swift
 18669      -   Stop SetModTime losing metadata (eg X-Object-Manifest)
 18670          -   This could have caused data loss for files > 5GB in size
 18671      -   Use ContentType from Object to avoid lookups in listings
 18672  -   OneDrive
 18673      -   disable server side copy as it seems to be broken at Microsoft
 18674  
 18675  
 18676  v1.24 - 2015-11-07
 18677  
 18678  -   New features
 18679      -   Add support for Microsoft OneDrive
 18680      -   Add --no-check-certificate option to disable server certificate
 18681          verification
 18682      -   Add async readahead buffer for faster transfer of big files
 18683  -   Fixes
 18684      -   Allow spaces in remotes and check remote names for validity at
 18685          creation time
 18686      -   Allow ‘&’ and disallow ‘:’ in Windows filenames.
 18687  -   Swift
 18688      -   Ignore directory marker objects where appropriate - allows
 18689          working with Hubic
 18690      -   Don’t delete the container if fs wasn’t at root
 18691  -   S3
 18692      -   Don’t delete the bucket if fs wasn’t at root
 18693  -   Google Cloud Storage
 18694      -   Don’t delete the bucket if fs wasn’t at root
 18695  
 18696  
 18697  v1.23 - 2015-10-03
 18698  
 18699  -   New features
 18700      -   Implement rclone size for measuring remotes
 18701  -   Fixes
 18702      -   Fix headless config for drive and gcs
 18703      -   Tell the user they should try again if the webserver method
 18704          failed
 18705      -   Improve output of --dump-headers
 18706  -   S3
 18707      -   Allow anonymous access to public buckets
 18708  -   Swift
 18709      -   Stop chunked operations logging “Failed to read info: Object Not
 18710          Found”
 18711      -   Use Content-Length on uploads for extra reliability
 18712  
 18713  
 18714  v1.22 - 2015-09-28
 18715  
 18716  -   Implement rsync like include and exclude flags
 18717  -   swift
 18718      -   Support files > 5GB - thanks Sergey Tolmachev
 18719  
 18720  
 18721  v1.21 - 2015-09-22
 18722  
 18723  -   New features
 18724      -   Display individual transfer progress
 18725      -   Make lsl output times in localtime
 18726  -   Fixes
 18727      -   Fix allowing user to override credentials again in Drive, GCS
 18728          and ACD
 18729  -   Amazon Drive
 18730      -   Implement compliant pacing scheme
 18731  -   Google Drive
 18732      -   Make directory reads concurrent for increased speed.
 18733  
 18734  
 18735  v1.20 - 2015-09-15
 18736  
 18737  -   New features
 18738      -   Amazon Drive support
 18739      -   Oauth support redone - fix many bugs and improve usability
18740          -   Use “golang.org/x/oauth2” as oauth library of choice
 18741          -   Improve oauth usability for smoother initial signup
 18742          -   drive, googlecloudstorage: optionally use auto config for
 18743              the oauth token
18744      -   Implement --dump-headers and --dump-bodies debug flags
 18745      -   Show multiple matched commands if abbreviation too short
 18746      -   Implement server side move where possible
 18747  -   local
 18748      -   Always use UNC paths internally on Windows - fixes a lot of bugs
 18749  -   dropbox
 18750      -   force use of our custom transport which makes timeouts work
 18751  -   Thanks to Klaus Post for lots of help with this release
 18752  
 18753  
 18754  v1.19 - 2015-08-28
 18755  
 18756  -   New features
 18757      -   Server side copies for s3/swift/drive/dropbox/gcs
 18758      -   Move command - uses server side copies if it can
18759      -   Implement --retries flag - tries 3 times by default
 18760      -   Build for plan9/amd64 and solaris/amd64 too
 18761  -   Fixes
 18762      -   Make a current version download with a fixed URL for scripting
 18763      -   Ignore rmdir in limited fs rather than throwing error
 18764  -   dropbox
 18765      -   Increase chunk size to improve upload speeds massively
18766      -   Issue an error message when trying to upload a bad file name
 18767  
 18768  
 18769  v1.18 - 2015-08-17
 18770  
 18771  -   drive
 18772      -   Add --drive-use-trash flag so rclone trashes instead of deletes
 18773      -   Add “Forbidden to download” message for files with no
 18774          downloadURL
 18775  -   dropbox
 18776      -   Remove datastore
 18777          -   This was deprecated and it caused a lot of problems
 18778          -   Modification times and MD5SUMs no longer stored
 18779      -   Fix uploading files > 2GB
 18780  -   s3
 18781      -   use official AWS SDK from github.com/aws/aws-sdk-go
 18782      -   NB will most likely require you to delete and recreate remote
 18783      -   enable multipart upload which enables files > 5GB
 18784      -   tested with Ceph / RadosGW / S3 emulation
 18785      -   many thanks to Sam Liston and Brian Haymore at the Utah Center
 18786          for High Performance Computing for a Ceph test account
 18787  -   misc
 18788      -   Show errors when reading the config file
 18789      -   Do not print stats in quiet mode - thanks Leonid Shalupov
 18790      -   Add FAQ
 18791      -   Fix created directories not obeying umask
 18792      -   Linux installation instructions - thanks Shimon Doodkin
 18793  
 18794  
 18795  v1.17 - 2015-06-14
 18796  
 18797  -   dropbox: fix case insensitivity issues - thanks Leonid Shalupov
 18798  
 18799  
 18800  v1.16 - 2015-06-09
 18801  
 18802  -   Fix uploading big files which was causing timeouts or panics
18803  -   Don’t check md5sum after download with --size-only
 18804  
 18805  
 18806  v1.15 - 2015-06-06
 18807  
18808  -   Add --checksum flag to only discard transfers by MD5SUM - thanks Alex
18809      Couper
18810  -   Implement --size-only flag to sync on size not checksum & modtime
 18811  -   Expand docs and remove duplicated information
 18812  -   Document rclone’s limitations with directories
 18813  -   dropbox: update docs about case insensitivity
 18814  
 18815  
 18816  v1.14 - 2015-05-21
 18817  
 18818  -   local: fix encoding of non utf-8 file names - fixes a duplicate file
 18819      problem
 18820  -   drive: docs about rate limiting
 18821  -   google cloud storage: Fix compile after API change in
 18822      “google.golang.org/api/storage/v1”
 18823  
 18824  
 18825  v1.13 - 2015-05-10
 18826  
 18827  -   Revise documentation (especially sync)
18828  -   Implement --timeout and --conntimeout
 18829  -   s3: ignore etags from multipart uploads which aren’t md5sums
 18830  
 18831  
 18832  v1.12 - 2015-03-15
 18833  
 18834  -   drive: Use chunked upload for files above a certain size
18835  -   drive: add --drive-chunk-size and --drive-upload-cutoff parameters
 18836  -   drive: switch to insert from update when a failed copy deletes the
 18837      upload
 18838  -   core: Log duplicate files if they are detected
 18839  
 18840  
 18841  v1.11 - 2015-03-04
 18842  
 18843  -   swift: add region parameter
 18844  -   drive: fix crash on failed to update remote mtime
 18845  -   In remote paths, change native directory separators to /
 18846  -   Add synchronization to ls/lsl/lsd output to stop corruptions
18847  -   Ensure all stats/log messages go to stderr
18848  -   Add --log-file flag to log everything (including panics) to file
18849  -   Make it possible to disable stats printing with --stats=0
18850  -   Implement --bwlimit to limit data transfer bandwidth
 18851  
 18852  
 18853  v1.10 - 2015-02-12
 18854  
 18855  -   s3: list an unlimited number of items
 18856  -   Fix getting stuck in the configurator
 18857  
 18858  
 18859  v1.09 - 2015-02-07
 18860  
 18861  -   windows: Stop drive letters (eg C:) getting mixed up with remotes
 18862      (eg drive:)
 18863  -   local: Fix directory separators on Windows
 18864  -   drive: fix rate limit exceeded errors
 18865  
 18866  
 18867  v1.08 - 2015-02-04
 18868  
 18869  -   drive: fix subdirectory listing to not list entire drive
 18870  -   drive: Fix SetModTime
 18871  -   dropbox: adapt code to recent library changes
 18872  
 18873  
 18874  v1.07 - 2014-12-23
 18875  
 18876  -   google cloud storage: fix memory leak
 18877  
 18878  
 18879  v1.06 - 2014-12-12
 18880  
 18881  -   Fix “Couldn’t find home directory” on OSX
 18882  -   swift: Add tenant parameter
 18883  -   Use new location of Google API packages
 18884  
 18885  
 18886  v1.05 - 2014-08-09
 18887  
 18888  -   Improved tests and consequently lots of minor fixes
 18889  -   core: Fix race detected by go race detector
 18890  -   core: Fixes after running errcheck
 18891  -   drive: reset root directory on Rmdir and Purge
 18892  -   fs: Document that Purger returns error on empty directory, test and
 18893      fix
 18894  -   google cloud storage: fix ListDir on subdirectory
 18895  -   google cloud storage: re-read metadata in SetModTime
 18896  -   s3: make reading metadata more reliable to work around eventual
 18897      consistency problems
 18898  -   s3: strip trailing / from ListDir()
 18899  -   swift: return directories without / in ListDir
 18900  
 18901  
 18902  v1.04 - 2014-07-21
 18903  
 18904  -   google cloud storage: Fix crash on Update
 18905  
 18906  
 18907  v1.03 - 2014-07-20
 18908  
 18909  -   swift, s3, dropbox: fix updated files being marked as corrupted
 18910  -   Make compile with go 1.1 again
 18911  
 18912  
 18913  v1.02 - 2014-07-19
 18914  
 18915  -   Implement Dropbox remote
 18916  -   Implement Google Cloud Storage remote
 18917  -   Verify Md5sums and Sizes after copies
 18918  -   Remove times from “ls” command - lists sizes only
18919  -   Add “lsl” - lists times and sizes
 18920  -   Add “md5sum” command
 18921  
 18922  
 18923  v1.01 - 2014-07-04
 18924  
 18925  -   drive: fix transfer of big files using up lots of memory
 18926  
 18927  
 18928  v1.00 - 2014-07-03
 18929  
 18930  -   drive: fix whole second dates
 18931  
 18932  
 18933  v0.99 - 2014-06-26
 18934  
18935  -   Fix --dry-run not working
 18936  -   Make compatible with go 1.1
 18937  
 18938  
 18939  v0.98 - 2014-05-30
 18940  
 18941  -   s3: Treat missing Content-Length as 0 for some ceph installations
 18942  -   rclonetest: add file with a space in
 18943  
 18944  
 18945  v0.97 - 2014-05-05
 18946  
 18947  -   Implement copying of single files
 18948  -   s3 & swift: support paths inside containers/buckets
 18949  
 18950  
 18951  v0.96 - 2014-04-24
 18952  
 18953  -   drive: Fix multiple files of same name being created
 18954  -   drive: Use o.Update and fs.Put to optimise transfers
18955  -   Add version number, -V and --version
 18956  
 18957  
 18958  v0.95 - 2014-03-28
 18959  
 18960  -   rclone.org: website, docs and graphics
 18961  -   drive: fix path parsing
 18962  
 18963  
 18964  v0.94 - 2014-03-27
 18965  
 18966  -   Change remote format one last time
 18967  -   GNU style flags
 18968  
 18969  
 18970  v0.93 - 2014-03-16
 18971  
 18972  -   drive: store token in config file
 18973  -   cross compile other versions
 18974  -   set strict permissions on config file
 18975  
 18976  
 18977  v0.92 - 2014-03-15
 18978  
18979  -   Config fixes and --config option
 18980  
 18981  
 18982  v0.91 - 2014-03-15
 18983  
 18984  -   Make config file
 18985  
 18986  
 18987  v0.90 - 2013-06-27
 18988  
 18989  -   Project named rclone
 18990  
 18991  
 18992  v0.00 - 2012-11-18
 18993  
 18994  -   Project started
 18995  
 18996  
 18997  Bugs and Limitations
 18998  
 18999  Empty directories are left behind / not created
 19000  
 19001  With remotes that have a concept of directory, eg Local and Drive, empty
 19002  directories may be left behind, or not created when one was expected.
 19003  
 19004  This is because rclone doesn’t have a concept of a directory - it only
 19005  works on objects. Most of the object storage systems can’t actually
 19006  store a directory so there is nowhere for rclone to store anything about
 19007  directories.
 19008  
19009  You can work around this to some extent with the purge command, which
19010  will delete everything under the path, INCLUDING empty directories.
 19011  
 19012  This may be fixed at some point in Issue #100
 19013  
 19014  Directory timestamps aren’t preserved
 19015  
 19016  For the same reason as the above, rclone doesn’t have a concept of a
 19017  directory - it only works on objects, therefore it can’t preserve the
 19018  timestamps of directories.
 19019  
 19020  
 19021  Frequently Asked Questions
 19022  
 19023  Do all cloud storage systems support all rclone commands
 19024  
 19025  Yes they do. All the rclone commands (eg sync, copy etc) will work on
 19026  all the remote storage systems.
 19027  
 19028  Can I copy the config from one machine to another
 19029  
 19030  Sure! Rclone stores all of its config in a single file. If you want to
 19031  find this file, run rclone config file which will tell you where it is.
 19032  
 19033  See the remote setup docs for more info.
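
For example (the config file path shown is illustrative - run
rclone config file to see yours):

    rclone config file
    # Configuration file is stored at:
    # /home/user/.config/rclone/rclone.conf
    scp ~/.config/rclone/rclone.conf otherhost:~/.config/rclone/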
 19034  
 19035  How do I configure rclone on a remote / headless box with no browser?
 19036  
 19037  This has now been documented in its own remote setup page.
 19038  
 19039  Can rclone sync directly from drive to s3
 19040  
 19041  Rclone can sync between two remote cloud storage systems just fine.
 19042  
 19043  Note that it effectively downloads the file and uploads it again, so the
 19044  node running rclone would need to have lots of bandwidth.
 19045  
 19046  The syncs would be incremental (on a file by file basis).
 19047  
 19048  Eg
 19049  
 19050      rclone sync drive:Folder s3:bucket
 19051  
 19052  Using rclone from multiple locations at the same time
 19053  
 19054  You can use rclone from multiple places at the same time if you choose
19055  a different subdirectory for the output, eg
 19056  
 19057      Server A> rclone sync /tmp/whatever remote:ServerA
 19058      Server B> rclone sync /tmp/whatever remote:ServerB
 19059  
19060  If you sync to the same directory then you should use rclone copy,
19061  otherwise the two rclones may delete each other’s files, eg
 19062  
 19063      Server A> rclone copy /tmp/whatever remote:Backup
 19064      Server B> rclone copy /tmp/whatever remote:Backup
 19065  
 19066  The file names you upload from Server A and Server B should be different
 19067  in this case, otherwise some file systems (eg Drive) may make
 19068  duplicates.
 19069  
 19070  Why doesn’t rclone support partial transfers / binary diffs like rsync?
 19071  
 19072  Rclone stores each file you transfer as a native object on the remote
 19073  cloud storage system. This means that you can see the files you upload
 19074  as expected using alternative access methods (eg using the Google Drive
 19075  web interface). There is a 1:1 mapping between files on your hard disk
 19076  and objects created in the cloud storage system.
 19077  
19078  None of the cloud storage systems I’ve come across so far support
19079  partially uploading an object. You can’t take an existing object and
19080  change some bytes in the middle of it.
 19081  
 19082  It would be possible to make a sync system which stored binary diffs
 19083  instead of whole objects like rclone does, but that would break the 1:1
 19084  mapping of files on your hard disk to objects in the remote cloud
 19085  storage system.
 19086  
 19087  All the cloud storage systems support partial downloads of content, so
19088  it would be possible to make partial downloads work. However, making
19089  this work efficiently would require storing a significant amount of
19090  metadata, which breaks the desired 1:1 mapping of files to objects.
 19091  
 19092  Can rclone do bi-directional sync?
 19093  
 19094  No, not at present. rclone only does uni-directional sync from A -> B.
19095  It may do so in the future, since it has all the primitives - it just
19096  requires writing the algorithm to do it.
 19097  
 19098  Can I use rclone with an HTTP proxy?
 19099  
 19100  Yes. rclone will follow the standard environment variables for proxies,
 19101  similar to cURL and other programs.
 19102  
 19103  In general the variables are called http_proxy (for services reached
 19104  over http) and https_proxy (for services reached over https). Most
 19105  public services will be using https, but you may wish to set both.
 19106  
 19107  The content of the variable is protocol://server:port. The protocol
19108  value is the one used to talk to the proxy server itself, and is
 19109  commonly either http or socks5.
 19110  
19111  Slightly annoyingly, there is no standard for the name; some
19112  applications use http_proxy while others use HTTP_PROXY. The Go
 19113  libraries used by rclone will try both variations, but you may wish to
 19114  set all possibilities. So, on Linux, you may end up with code similar to
 19115  
 19116      export http_proxy=http://proxyserver:12345
 19117      export https_proxy=$http_proxy
 19118      export HTTP_PROXY=$http_proxy
 19119      export HTTPS_PROXY=$http_proxy
 19120  
19121  The NO_PROXY variable allows you to disable the proxy for specific
19122  hosts. Hosts must be comma separated, and can contain domains or parts.
 19123  “foo.com” also matches “bar.foo.com”.
 19124  
 19125  e.g.
 19126  
 19127      export no_proxy=localhost,127.0.0.0/8,my.host.name
 19128      export NO_PROXY=$no_proxy
 19129  
 19130  Note that the ftp backend does not support ftp_proxy yet.
 19131  
 19132  Rclone gives x509: failed to load system roots and no roots provided error
 19133  
19134  This means that rclone can’t find the SSL root certificates. Likely you
 19135  are running rclone on a NAS with a cut-down Linux OS, or possibly on
 19136  Solaris.
 19137  
 19138  Rclone (via the Go runtime) tries to load the root certificates from
 19139  these places on Linux.
 19140  
 19141      "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
 19142      "/etc/pki/tls/certs/ca-bundle.crt",   // Fedora/RHEL
 19143      "/etc/ssl/ca-bundle.pem",             // OpenSUSE
 19144      "/etc/pki/tls/cacert.pem",            // OpenELEC
 19145  
 19146  So doing something like this should fix the problem. It also sets the
19147  time, which is important for SSL to work properly.
 19148  
 19149      mkdir -p /etc/ssl/certs/
 19150      curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
 19151      ntpclient -s -h pool.ntp.org
 19152  
19153  The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned
19154  in the x509 package, provide an additional way to supply the SSL root
19155  certificates.
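
For example, assuming you have downloaded a certificate bundle to
/etc/ssl/certs/ca-certificates.crt as above:

    export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
    export SSL_CERT_DIR=/etc/ssl/certs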
 19156  
 19157  Note that you may need to add the --insecure option to the curl command
19158  line if it doesn’t work without it.
 19159  
 19160      curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
 19161  
 19162  Rclone gives Failed to load config file: function not implemented error
 19163  
19164  Likely this means that you are running rclone on a Linux kernel version
19165  not supported by the Go runtime, ie earlier than version 2.6.23.
 19166  
 19167  See the system requirements section in the go install docs for full
 19168  details.
 19169  
 19170  All my uploaded docx/xlsx/pptx files appear as archive/zip
 19171  
 19172  This is caused by uploading these files from a Windows computer which
19173  hasn’t got the Microsoft Office suite installed. The easiest way to fix
19174  this is to install the Word viewer and the Microsoft Office Compatibility
 19175  Pack for Word, Excel, and PowerPoint 2007 and later versions’ file
 19176  formats
 19177  
 19178  tcp lookup some.domain.com no such host
 19179  
 19180  This happens when rclone cannot resolve a domain. Please check that your
 19181  DNS setup is generally working, e.g.
 19182  
 19183      # both should print a long list of possible IP addresses
 19184      dig www.googleapis.com          # resolve using your default DNS
 19185      dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
 19186  
 19187  If you are using systemd-resolved (default on Arch Linux), ensure it is
19188  at version 233 or higher. Previous releases contain a bug which
19189  prevents some domains from being resolved properly.
 19190  
19191  Additionally, the GODEBUG=netdns= environment variable can be used to
19192  influence which resolver Go uses. This can also help resolve certain
19193  issues with DNS resolution. See the name resolution section in the go
19194  docs.
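
For example, to force the pure Go resolver for a single run:

    GODEBUG=netdns=go rclone lsd remote:

Use GODEBUG=netdns=cgo to force the system (cgo) resolver instead.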
 19195  
 19196  The total size reported in the stats for a sync is wrong and keeps changing
 19197  
 19198  It is likely you have more than 10,000 files that need to be synced. By
 19199  default rclone only gets 10,000 files ahead in a sync so as not to use
19200  up too much memory. You can change this default with the --max-backlog
 19201  flag.
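
For example, to let rclone queue up to 200000 files ahead of the
transfers (at the cost of extra memory):

    rclone sync /path/to/source remote:dest --max-backlog 200000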
 19202  
 19203  Rclone is using too much memory or appears to have a memory leak
 19204  
 19205  Rclone is written in Go which uses a garbage collector. The default
 19206  settings for the garbage collector mean that it runs when the heap size
 19207  has doubled.
 19208  
 19209  However it is possible to tune the garbage collector to use less memory
 19210  by setting GOGC to a lower value, say export GOGC=20. This will make the
 19211  garbage collector work harder, reducing memory size at the expense of
 19212  CPU usage.
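
For example, to run a sync with a more aggressive garbage collector:

    GOGC=20 rclone sync /path/to/source remote:dest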
 19213  
 19214  The most common cause of rclone using lots of memory is a single
19215  directory with thousands or millions of files in it. Rclone has to load
19216  this entirely into memory as rclone objects. Each rclone object takes
19217  0.5k-1k of memory.
 19218  
 19219  
 19220  License
 19221  
19222  This is free software under the terms of the MIT license (check the
 19223  COPYING file included with the source code).
 19224  
 19225      Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/
 19226  
 19227      Permission is hereby granted, free of charge, to any person obtaining a copy
 19228      of this software and associated documentation files (the "Software"), to deal
 19229      in the Software without restriction, including without limitation the rights
 19230      to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 19231      copies of the Software, and to permit persons to whom the Software is
 19232      furnished to do so, subject to the following conditions:
 19233  
 19234      The above copyright notice and this permission notice shall be included in
 19235      all copies or substantial portions of the Software.
 19236  
 19237      THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 19238      IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 19239      FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 19240      AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 19241      LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 19242      OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 19243      THE SOFTWARE.
 19244  
 19245  
 19246  Authors
 19247  
 19248  -   Nick Craig-Wood nick@craig-wood.com
 19249  
 19250  
 19251  Contributors
 19252  
 19253  -   Alex Couper amcouper@gmail.com
 19254  -   Leonid Shalupov leonid@shalupov.com shalupov@diverse.org.ru
 19255  -   Shimon Doodkin helpmepro1@gmail.com
 19256  -   Colin Nicholson colin@colinn.com
 19257  -   Klaus Post klauspost@gmail.com
 19258  -   Sergey Tolmachev tolsi.ru@gmail.com
 19259  -   Adriano Aurélio Meirelles adriano@atinge.com
 19260  -   C. Bess cbess@users.noreply.github.com
 19261  -   Dmitry Burdeev dibu28@gmail.com
 19262  -   Joseph Spurrier github@josephspurrier.com
 19263  -   Björn Harrtell bjorn@wololo.org
 19264  -   Xavier Lucas xavier.lucas@corp.ovh.com
 19265  -   Werner Beroux werner@beroux.com
 19266  -   Brian Stengaard brian@stengaard.eu
 19267  -   Jakub Gedeon jgedeon@sofi.com
 19268  -   Jim Tittsler jwt@onjapan.net
 19269  -   Michal Witkowski michal@improbable.io
 19270  -   Fabian Ruff fabian.ruff@sap.com
 19271  -   Leigh Klotz klotz@quixey.com
 19272  -   Romain Lapray lapray.romain@gmail.com
 19273  -   Justin R. Wilson jrw972@gmail.com
 19274  -   Antonio Messina antonio.s.messina@gmail.com
 19275  -   Stefan G. Weichinger office@oops.co.at
 19276  -   Per Cederberg cederberg@gmail.com
 19277  -   Radek Šenfeld rush@logic.cz
 19278  -   Fredrik Fornwall fredrik@fornwall.net
 19279  -   Asko Tamm asko@deekit.net
 19280  -   xor-zz xor@gstocco.com
 19281  -   Tomasz Mazur tmazur90@gmail.com
 19282  -   Marco Paganini paganini@paganini.net
 19283  -   Felix Bünemann buenemann@louis.info
 19284  -   Durval Menezes jmrclone@durval.com
 19285  -   Luiz Carlos Rumbelsperger Viana maxd13_luiz_carlos@hotmail.com
 19286  -   Stefan Breunig stefan-github@yrden.de
 19287  -   Alishan Ladhani ali-l@users.noreply.github.com
 19288  -   0xJAKE 0xJAKE@users.noreply.github.com
 19289  -   Thibault Molleman thibaultmol@users.noreply.github.com
 19290  -   Scott McGillivray scott.mcgillivray@gmail.com
 19291  -   Bjørn Erik Pedersen bjorn.erik.pedersen@gmail.com
 19292  -   Lukas Loesche lukas@mesosphere.io
 19293  -   emyarod allllaboutyou@gmail.com
 19294  -   T.C. Ferguson tcf909@gmail.com
 19295  -   Brandur brandur@mutelight.org
 19296  -   Dario Giovannetti dev@dariogiovannetti.net
 19297  -   Károly Oláh okaresz@aol.com
 19298  -   Jon Yergatian jon@macfanatic.ca
 19299  -   Jack Schmidt github@mowsey.org
 19300  -   Dedsec1 Dedsec1@users.noreply.github.com
 19301  -   Hisham Zarka hzarka@gmail.com
 19302  -   Jérôme Vizcaino jerome.vizcaino@gmail.com
 19303  -   Mike Tesch mjt6129@rit.edu
 19304  -   Marvin Watson marvwatson@users.noreply.github.com
 19305  -   Danny Tsai danny8376@gmail.com
 19306  -   Yoni Jah yonjah+git@gmail.com yonjah+github@gmail.com
 19307  -   Stephen Harris github@spuddy.org sweharris@users.noreply.github.com
 19308  -   Ihor Dvoretskyi ihor.dvoretskyi@gmail.com
 19309  -   Jon Craton jncraton@gmail.com
 19310  -   Hraban Luyat hraban@0brg.net
 19311  -   Michael Ledin mledin89@gmail.com
 19312  -   Martin Kristensen me@azgul.com
 19313  -   Too Much IO toomuchio@users.noreply.github.com
 19314  -   Anisse Astier anisse@astier.eu
 19315  -   Zahiar Ahmed zahiar@live.com
 19316  -   Igor Kharin igorkharin@gmail.com
 19317  -   Bill Zissimopoulos billziss@navimatics.com
 19318  -   Bob Potter bobby.potter@gmail.com
 19319  -   Steven Lu tacticalazn@gmail.com
 19320  -   Sjur Fredriksen sjurtf@ifi.uio.no
 19321  -   Ruwbin hubus12345@gmail.com
 19322  -   Fabian Möller fabianm88@gmail.com f.moeller@nynex.de
 19323  -   Edward Q. Bridges github@eqbridges.com
 19324  -   Vasiliy Tolstov v.tolstov@selfip.ru
 19325  -   Harshavardhana harsha@minio.io
 19326  -   sainaen sainaen@gmail.com
 19327  -   gdm85 gdm85@users.noreply.github.com
 19328  -   Yaroslav Halchenko debian@onerussian.com
 19329  -   John Papandriopoulos jpap@users.noreply.github.com
 19330  -   Zhiming Wang zmwangx@gmail.com
 19331  -   Andy Pilate cubox@cubox.me
 19332  -   Oliver Heyme olihey@googlemail.com olihey@users.noreply.github.com
 19333      de8olihe@lego.com
 19334  -   wuyu wuyu@yunify.com
 19335  -   Andrei Dragomir adragomi@adobe.com
 19336  -   Christian Brüggemann mail@cbruegg.com
 19337  -   Alex McGrath Kraak amkdude@gmail.com
 19338  -   bpicode bjoern.pirnay@googlemail.com
 19339  -   Daniel Jagszent daniel@jagszent.de
 19340  -   Josiah White thegenius2009@gmail.com
 19341  -   Ishuah Kariuki kariuki@ishuah.com ishuah91@gmail.com
 19342  -   Jan Varho jan@varho.org
 19343  -   Girish Ramakrishnan girish@cloudron.io
 19344  -   LingMan LingMan@users.noreply.github.com
 19345  -   Jacob McNamee jacobmcnamee@gmail.com
 19346  -   jersou jertux@gmail.com
 19347  -   thierry thierry@substantiel.fr
 19348  -   Simon Leinen simon.leinen@gmail.com ubuntu@s3-test.novalocal
 19349  -   Dan Dascalescu ddascalescu+github@gmail.com
 19350  -   Jason Rose jason@jro.io
 19351  -   Andrew Starr-Bochicchio a.starr.b@gmail.com
 19352  -   John Leach john@johnleach.co.uk
 19353  -   Corban Raun craun@instructure.com
 19354  -   Pierre Carlson mpcarl@us.ibm.com
 19355  -   Ernest Borowski er.borowski@gmail.com
 19356  -   Remus Bunduc remus.bunduc@gmail.com
 19357  -   Iakov Davydov iakov.davydov@unil.ch dav05.gith@myths.ru
 19358  -   Jakub Tasiemski tasiemski@gmail.com
 19359  -   David Minor dminor@saymedia.com
 19360  -   Tim Cooijmans cooijmans.tim@gmail.com
 19361  -   Laurence liuxy6@gmail.com
 19362  -   Giovanni Pizzi gio.piz@gmail.com
 19363  -   Filip Bartodziej filipbartodziej@gmail.com
 19364  -   Jon Fautley jon@dead.li
 19365  -   lewapm 32110057+lewapm@users.noreply.github.com
 19366  -   Yassine Imounachen yassine256@gmail.com
 19367  -   Chris Redekop chris-redekop@users.noreply.github.com
 19368      chris.redekop@gmail.com
 19369  -   Jon Fautley jon@adenoid.appstal.co.uk
 19370  -   Will Gunn WillGunn@users.noreply.github.com
 19371  -   Lucas Bremgartner lucas@bremis.ch
 19372  -   Jody Frankowski jody.frankowski@gmail.com
 19373  -   Andreas Roussos arouss1980@gmail.com
 19374  -   nbuchanan nbuchanan@utah.gov
 19375  -   Durval Menezes rclone@durval.com
 19376  -   Victor vb-github@viblo.se
 19377  -   Mateusz pabian.mateusz@gmail.com
 19378  -   Daniel Loader spicypixel@gmail.com
 19379  -   David0rk davidork@gmail.com
 19380  -   Alexander Neumann alexander@bumpern.de
 19381  -   Giri Badanahatti gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local
 19382  -   Leo R. Lundgren leo@finalresort.org
 19383  -   wolfv wolfv6@users.noreply.github.com
 19384  -   Dave Pedu dave@davepedu.com
 19385  -   Stefan Lindblom lindblom@spotify.com
 19386  -   seuffert oliver@seuffert.biz
 19387  -   gbadanahatti 37121690+gbadanahatti@users.noreply.github.com
 19388  -   Keith Goldfarb barkofdelight@gmail.com
 19389  -   Steve Kriss steve@heptio.com
 19390  -   Chih-Hsuan Yen yan12125@gmail.com
 19391  -   Alexander Neumann fd0@users.noreply.github.com
 19392  -   Matt Holt mholt@users.noreply.github.com
 19393  -   Eri Bastos bastos.eri@gmail.com
 19394  -   Michael P. Dubner pywebmail@list.ru
 19395  -   Antoine GIRARD sapk@users.noreply.github.com
 19396  -   Mateusz Piotrowski mpp302@gmail.com
 19397  -   Animosity022 animosity22@users.noreply.github.com
 19398      earl.texter@gmail.com
 19399  -   Peter Baumgartner pete@lincolnloop.com
 19400  -   Craig Rachel craig@craigrachel.com
 19401  -   Michael G. Noll miguno@users.noreply.github.com
 19402  -   hensur me@hensur.de
 19403  -   Oliver Heyme de8olihe@lego.com
 19404  -   Richard Yang richard@yenforyang.com
 19405  -   Piotr Oleszczyk piotr.oleszczyk@gmail.com
 19406  -   Rodrigo rodarima@gmail.com
 19407  -   NoLooseEnds NoLooseEnds@users.noreply.github.com
 19408  -   Jakub Karlicek jakub@karlicek.me
 19409  -   John Clayton john@codemonkeylabs.com
 19410  -   Kasper Byrdal Nielsen byrdal76@gmail.com
 19411  -   Benjamin Joseph Dag bjdag1234@users.noreply.github.com
 19412  -   themylogin themylogin@gmail.com
 19413  -   Onno Zweers onno.zweers@surfsara.nl
 19414  -   Jasper Lievisse Adriaanse jasper@humppa.nl
 19415  -   sandeepkru sandeep.ummadi@gmail.com
 19416      sandeepkru@users.noreply.github.com
 19417  -   HerrH atomtigerzoo@users.noreply.github.com
 19418  -   Andrew 4030760+sparkyman215@users.noreply.github.com
 19419  -   dan smith XX1011@gmail.com
 19420  -   Oleg Kovalov iamolegkovalov@gmail.com
 19421  -   Ruben Vandamme github-com-00ff86@vandamme.email
 19422  -   Cnly minecnly@gmail.com
 19423  -   Andres Alvarez 1671935+kir4h@users.noreply.github.com
 19424  -   reddi1 xreddi@gmail.com
 19425  -   Matt Tucker matthewtckr@gmail.com
 19426  -   Sebastian Bünger buengese@gmail.com
 19427  -   Martin Polden mpolden@mpolden.no
 19428  -   Alex Chen Cnly@users.noreply.github.com
 19429  -   Denis deniskovpen@gmail.com
 19430  -   bsteiss 35940619+bsteiss@users.noreply.github.com
 19431  -   Cédric Connes cedric.connes@gmail.com
 19432  -   Dr. Tobias Quathamer toddy15@users.noreply.github.com
 19433  -   dcpu 42736967+dcpu@users.noreply.github.com
 19434  -   Sheldon Rupp me@shel.io
 19435  -   albertony 12441419+albertony@users.noreply.github.com
 19436  -   cron410 cron410@gmail.com
 19437  -   Anagh Kumar Baranwal anaghk.dos@gmail.com
 19438  -   Felix Brucker felix@felixbrucker.com
 19439  -   Santiago Rodríguez scollazo@users.noreply.github.com
 19440  -   Craig Miskell craig.miskell@fluxfederation.com
 19441  -   Antoine GIRARD sapk@sapk.fr
 19442  -   Joanna Marek joanna.marek@u2i.com
 19443  -   frenos frenos@users.noreply.github.com
 19444  -   ssaqua ssaqua@users.noreply.github.com
 19445  -   xnaas me@xnaas.info
 19446  -   Frantisek Fuka fuka@fuxoft.cz
 19447  -   Paul Kohout pauljkohout@yahoo.com
 19448  -   dcpu 43330287+dcpu@users.noreply.github.com
 19449  -   jackyzy823 jackyzy823@gmail.com
 19450  -   David Haguenauer ml@kurokatta.org
 19451  -   teresy hi.teresy@gmail.com
 19452  -   buergi patbuergi@gmx.de
 19453  -   Florian Gamboeck mail@floga.de
 19454  -   Ralf Hemberger 10364191+rhemberger@users.noreply.github.com
 19455  -   Scott Edlund sedlund@users.noreply.github.com
 19456  -   Erik Swanson erik@retailnext.net
 19457  -   Jake Coggiano jake@stripe.com
 19458  -   brused27 brused27@noemailaddress
 19459  -   Peter Kaminski kaminski@istori.com
 19460  -   Henry Ptasinski henry@logout.com
 19461  -   Alexander kharkovalexander@gmail.com
 19462  -   Garry McNulty garrmcnu@gmail.com
 19463  -   Mathieu Carbou mathieu.carbou@gmail.com
 19464  -   Mark Otway mark@otway.com
 19465  -   William Cocker 37018962+WilliamCocker@users.noreply.github.com
 19466  -   François Leurent 131.js@cloudyks.org
 19467  -   Arkadius Stefanski arkste@gmail.com
 19468  -   Jay dev@jaygoel.com
 19469  -   andrea rota a@xelera.eu
 19470  -   nicolov nicolov@users.noreply.github.com
 19471  -   Dario Guzik dario@guzik.com.ar
 19472  -   qip qip@users.noreply.github.com
 19473  -   yair@unicorn yair@unicorn
 19474  -   Matt Robinson brimstone@the.narro.ws
 19475  -   kayrus kay.diam@gmail.com
 19476  -   Rémy Léone remy.leone@gmail.com
 19477  -   Wojciech Smigielski wojciech.hieronim.smigielski@gmail.com
 19478  -   weetmuts oehrstroem@gmail.com
 19479  -   Jonathan vanillajonathan@users.noreply.github.com
 19480  -   James Carpenter orbsmiv@users.noreply.github.com
 19481  -   Vince vince0villamora@gmail.com
 19482  -   Nestar47 47841759+Nestar47@users.noreply.github.com
 19483  -   Six brbsix@gmail.com
 19484  -   Alexandru Bumbacea alexandru.bumbacea@booking.com
 19485  -   calisro robert.calistri@gmail.com
 19486  -   Dr.Rx david.rey@nventive.com
 19487  -   marcintustin marcintustin@users.noreply.github.com
 19488  -   jaKa Močnik jaka@koofr.net
 19489  -   Fionera fionera@fionera.de
 19490  -   Dan Walters dan@walters.io
 19491  -   Danil Semelenov sgtpep@users.noreply.github.com
 19492  -   xopez 28950736+xopez@users.noreply.github.com
 19493  -   Ben Boeckel mathstuf@gmail.com
 19494  -   Manu manu@snapdragon.cc
 19495  -   Kyle E. Mitchell kyle@kemitchell.com
 19496  -   Gary Kim gary@garykim.dev
 19497  -   Jon jonathn@github.com
 19498  -   Jeff Quinn jeffrey.quinn@bluevoyant.com
 19499  -   Peter Berbec peter@berbec.com
 19500  -   didil 1284255+didil@users.noreply.github.com
 19501  -   id01 gaviniboom@gmail.com
 19502  -   Robert Marko robimarko@gmail.com
 19503  -   Philip Harvey 32467456+pharveybattelle@users.noreply.github.com
 19504  -   JorisE JorisE@users.noreply.github.com
 19505  -   garry415 garry.415@gmail.com
 19506  -   forgems forgems@gmail.com
 19507  -   Florian Apolloner florian@apolloner.eu
 19508  -   Aleksandar Jankovic office@ajankovic.com
 19509  
 19510  
 19511  
 19512  CONTACT THE RCLONE PROJECT
 19513  
 19514  
 19515  Forum
 19516  
 19517  Forum for questions and general discussion:
 19518  
 19519  -   https://forum.rclone.org
 19520  
 19521  
19522  GitHub project
 19523  
 19524  The project website is at:
 19525  
 19526  -   https://github.com/ncw/rclone
 19527  
 19528  There you can file bug reports or contribute pull requests.
 19529  
 19530  
 19531  Twitter
 19532  
19533  You can also follow me on Twitter for rclone announcements:
 19534  
19535  -   @njcw - https://twitter.com/njcw
 19536  
 19537  
 19538  Email
 19539  
19540  Or if all else fails, or you want to ask something private or
19541  confidential, email Nick Craig-Wood.