---
title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
date: "2017-06-10"
---

<i class="fab fa-amazon"></i> Amazon Drive
-----------------------------------------

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.

## Status

**Important:** rclone supports Amazon Drive only if you have your own
set of API keys. Unfortunately the [Amazon Drive developer
program](https://developer.amazon.com/amazon-drive) is now closed to
new entries so if you don't already have your own set of keys you will
not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API
keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).

If you happen to know anyone who works at Amazon then please ask them
to re-instate rclone into the Amazon Drive developer program - thanks!

## Setup

The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.

The configuration process for Amazon Drive may involve using an [oauth
proxy](https://github.com/ncw/oauthproxy). This is used to keep the
Amazon credentials out of the source code. The proxy runs in Google's
very secure App Engine environment and doesn't store any credentials
which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials,
you will either need to have your own `client_id` and
`client_secret` with Amazon Drive, or use a third party oauth proxy,
in which case you will need to enter `client_id`, `client_secret`,
`auth_url` and `token_url`.

Note also that if you are not using Amazon's `auth_url` and `token_url`
(ie you filled in something for those), then when setting up on a remote
machine you can only use the [copying the config method of
configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file)
- `rclone authorize` will not work.
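In that case the usual approach is to complete the configuration on a
machine which does have a browser and then copy the config file across.
A minimal sketch, where `user@headless` and the paths are placeholders
for your own setup (and the destination directory is assumed to already
exist):

```
# On the machine with a browser, complete the walkthrough shown below,
# then find out where the config file was written:
rclone config
rclone config file

# Copy that file to the same location on the headless machine, e.g.:
scp ~/.config/rclone/rclone.conf user@headless:~/.config/rclone/
```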
Here is an example of how to make a remote called `remote`. First run:

     rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon Drive
   \ "amazon cloud drive"
[snip]
Storage> amazon cloud drive
Amazon Application Client Id - required.
client_id> your client ID goes here
Amazon Application Client Secret - required.
client_secret> your client secret goes here
Auth server URL - leave blank to use Amazon's.
auth_url> Optional auth URL
Token server url - leave blank to use Amazon's.
token_url> Optional token URL
Remote config
Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = your client ID goes here
client_secret = your client secret goes here
auth_url = Optional auth URL
token_url = Optional token URL
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon. This only runs from the moment it
opens your browser to the moment you get back the verification
code. This is on `http://127.0.0.1:53682/` and it may require you to
unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your Amazon Drive

    rclone lsd remote:

List all the files in your Amazon Drive

    rclone ls remote:

To copy a local directory to an Amazon Drive directory called backup

    rclone copy /home/source remote:backup

### Modified time and MD5SUMs ###

Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.

It does store MD5SUMs so for a more accurate sync, you can use the
`--checksum` flag.
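For example, to copy the same local directory as above but decide what
needs transferring by comparing MD5 checksums instead of modification
times:

    rclone copy --checksum /home/source remote:backup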
#### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Deleting files ###

Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website. As of November 17, 2016, files are
automatically deleted by Amazon from the trash after 30 days.

### Using with non `.com` Amazon accounts ###

Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/amazonclouddrive/amazonclouddrive.go then run make backenddocs -->
### Standard Options

Here are the standard options specific to amazon cloud drive (Amazon Drive).

#### --acd-client-id

Amazon Application Client ID.

- Config: client_id
- Env Var: RCLONE_ACD_CLIENT_ID
- Type: string
- Default: ""

#### --acd-client-secret

Amazon Application Client Secret.

- Config: client_secret
- Env Var: RCLONE_ACD_CLIENT_SECRET
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to amazon cloud drive (Amazon Drive).

#### --acd-auth-url

Auth server URL.
Leave blank to use Amazon's.

- Config: auth_url
- Env Var: RCLONE_ACD_AUTH_URL
- Type: string
- Default: ""

#### --acd-token-url

Token server URL.
Leave blank to use Amazon's.

- Config: token_url
- Env Var: RCLONE_ACD_TOKEN_URL
- Type: string
- Default: ""

#### --acd-checkpoint

Checkpoint for internal polling (debug).

- Config: checkpoint
- Env Var: RCLONE_ACD_CHECKPOINT
- Type: string
- Default: ""

#### --acd-upload-wait-per-gb

Additional time per GB to wait after a failed complete upload to see if it appears.

Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
for the file to appear.

The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
file appears.

You can disable this feature by setting it to 0. This may cause
conflict errors as rclone retries the failed upload but the file will
most likely appear correctly eventually.

These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.

Upload with the "-v" flag to see more info about what rclone is doing
in this situation.

- Config: upload_wait_per_gb
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
- Type: Duration
- Default: 3m0s

#### --acd-templink-threshold

Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink"
which downloads the file through a temporary URL directly from the
underlying S3 storage.

- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
- Default: 9G

#### --acd-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot

<!--- autogenerated options stop -->

### Limitations ###

Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see `--retries` flag) which should hopefully work
around this problem.

Amazon Drive has an internal limit of file sizes that can be uploaded
to the service. This limit is not officially published, but all files
larger than this will fail.

At the time of writing (Jan 2016) this is in the area of 50GB per file.
This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as it would any
other failure. To avoid this problem, use the `--max-size 50000M` option
to limit the maximum size of uploaded files. Note that `--max-size` does
not split files into segments, it only ignores files over this size.
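For example, to copy a directory while skipping anything at or above
that approximate limit, instead of letting the oversized uploads fail
and retry:

    rclone copy --max-size 50000M /home/source remote:backup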