---
title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
---

{{< icon "fab fa-amazon" >}} Amazon Drive
-----------------------------------------

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.

## Status

**Important:** rclone supports Amazon Drive only if you have your own
set of API keys. Unfortunately the [Amazon Drive developer
program](https://developer.amazon.com/amazon-drive) is now closed to
new entries so if you don't already have your own set of keys you will
not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API
keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).

If you happen to know anyone who works at Amazon then please ask them
to re-instate rclone into the Amazon Drive developer program - thanks!

## Setup

The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.

The configuration process for Amazon Drive may involve using an [oauth
proxy](https://github.com/ncw/oauthproxy). This is used to keep the
Amazon credentials out of the source code. The proxy runs in Google's
very secure App Engine environment and doesn't store any credentials
which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials,
you will either need to have your own `client_id` and
`client_secret` with Amazon Drive, or use a third party oauth proxy,
in which case you will need to enter `client_id`, `client_secret`,
`auth_url` and `token_url`.

Note also that if you are not using Amazon's `auth_url` and `token_url`
(ie you filled in something for those), then when setting up on a remote
machine you can only use the [copying the config method of
configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file)
- `rclone authorize` will not work.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon Drive
   \ "amazon cloud drive"
[snip]
Storage> amazon cloud drive
Amazon Application Client Id - required.
client_id> your client ID goes here
Amazon Application Client Secret - required.
client_secret> your client secret goes here
Auth server URL - leave blank to use Amazon's.
auth_url> Optional auth URL
Token server url - leave blank to use Amazon's.
token_url> Optional token URL
Remote config
Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = your client ID goes here
client_secret = your client secret goes here
auth_url = Optional auth URL
token_url = Optional token URL
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
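If you prefer to create the remote without the interactive prompts (for
example from a script), `rclone config create` can be used instead. This
is only a sketch - the remote name and the placeholder credentials below
are examples to replace with your own values, and rclone will still need
to obtain a token from Amazon, so expect the authorization step to run
as well:

    rclone config create remote "amazon cloud drive" client_id YOUR_CLIENT_ID client_secret YOUR_CLIENT_SECRET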
See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon. This only runs from the moment it
opens your browser to the moment you get back the verification
code. This is on `http://127.0.0.1:53682/` and it may require
you to unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your Amazon Drive

    rclone lsd remote:

List all the files in your Amazon Drive

    rclone ls remote:

To copy a local directory to an Amazon Drive directory called backup

    rclone copy /home/source remote:backup

### Modified time and MD5SUMs ###

Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.

It does store MD5SUMs so for a more accurate sync, you can use the
`--checksum` flag, as in the example below.
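As a sketch (the paths here are only examples), a sync that verifies
files by MD5 checksum rather than by size and modification time might
look like this:

    rclone sync --checksum /home/source remote:backup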
#### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Deleting files ###

Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website. As of November 17, 2016, files are
automatically deleted by Amazon from the trash after 30 days.

### Using with non `.com` Amazon accounts ###

Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/amazonclouddrive/amazonclouddrive.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to amazon cloud drive (Amazon Drive).

#### --acd-client-id

Amazon Application Client ID.

- Config: client_id
- Env Var: RCLONE_ACD_CLIENT_ID
- Type: string
- Default: ""

#### --acd-client-secret

Amazon Application Client Secret.

- Config: client_secret
- Env Var: RCLONE_ACD_CLIENT_SECRET
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to amazon cloud drive (Amazon Drive).

#### --acd-auth-url

Auth server URL.
Leave blank to use Amazon's.

- Config: auth_url
- Env Var: RCLONE_ACD_AUTH_URL
- Type: string
- Default: ""

#### --acd-token-url

Token server url.
leave blank to use Amazon's.

- Config: token_url
- Env Var: RCLONE_ACD_TOKEN_URL
- Type: string
- Default: ""

#### --acd-checkpoint

Checkpoint for internal polling (debug).

- Config: checkpoint
- Env Var: RCLONE_ACD_CHECKPOINT
- Type: string
- Default: ""

#### --acd-upload-wait-per-gb

Additional time per GB to wait after a failed complete upload to see if it appears.

Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
for the file to appear.

The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
file appears.

You can disable this feature by setting it to 0. This may cause
conflict errors as rclone retries the failed upload but the file will
most likely appear correctly eventually.

These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.

Upload with the "-v" flag to see more info about what rclone is doing
in this situation.

- Config: upload_wait_per_gb
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
- Type: Duration
- Default: 3m0s

#### --acd-templink-threshold

Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink"
which downloads the file through a temporary URL directly from the
underlying S3 storage.

- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
- Default: 9G

#### --acd-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}

### Limitations ###

Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see the `--retries` flag) which should hopefully work
around this problem.

Amazon Drive has an internal limit on the size of files that can be
uploaded to the service. This limit is not officially published, but
all files larger than this will fail.

At the time of writing (Jan 2016) this is in the area of 50GB per file.
This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as it would any
other failure. To avoid this problem, use the `--max-size 50000M` option
to limit the maximum size of uploaded files. Note that `--max-size` does
not split files into segments, it only ignores files over this size.
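As an illustration (the paths are only examples), a copy that skips
anything too big for Amazon Drive to accept might look like this:

    rclone copy --max-size 50000M /home/source remote:backup

Anything over 50000M is simply left behind on the source rather than
uploaded in pieces.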