---
title: "Microsoft Azure Blob Storage"
description: "Rclone docs for Microsoft Azure Blob Storage"
date: "2017-07-30"
---

<i class="fab fa-windows"></i> Microsoft Azure Blob Storage
-----------------------------------------

Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg
`remote:container/path/to/dir`.

Here is an example of making a Microsoft Azure Blob Storage
configuration for a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Microsoft Azure Blob Storage
   \ "azureblob"
[snip]
Storage> azureblob
Storage Account Name
account> account_name
Storage Account Key
key> base64encodedkey==
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = account_name
key = base64encodedkey==
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See all containers

    rclone lsd remote:

Make a new container

    rclone mkdir remote:container

List the contents of a container

    rclone ls remote:container

Sync `/home/local/directory` to the remote container, deleting any excess
files in the container.

    rclone sync /home/local/directory remote:container

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.

### Modified time ###

The modified time is stored as metadata on the object with the `mtime`
key. It is stored using RFC3339 Format time with nanosecond
precision. The metadata is supplied during directory listings so
there is no overhead to using it.

### Restricted filename characters

In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| /         | 0x2F  | ／          |
| \         | 0x5C  | ＼          |

File names can also not end with the following characters.
These only get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| .         | 0x2E  | ．          |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

### Hashes ###

MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
hashes, eg the local disk.

### Authenticating with Azure Blob Storage

Rclone has 3 ways of authenticating with Azure Blob Storage:

#### Account and Key

This is the most straightforward and least flexible way. Just fill in
the `account` and `key` lines and leave the rest blank.
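If you would rather edit the rclone config file directly, the equivalent section is just a few lines. This is a minimal sketch: the account name and key are placeholders, and the remote name can be anything you like.

```
[remote]
type = azureblob
account = account_name
key = base64encodedkey==
```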
#### SAS URL

This can be an account level SAS URL or container level SAS URL.

To use it leave `account` and `key` blank and fill in `sas_url`.

An account level SAS URL or container level SAS URL can be obtained
from the Azure portal or Azure Storage Explorer. To get a container
level SAS URL right click on a container in the Azure Blob explorer
in the Azure portal.

If you use a container level SAS URL, rclone operations are permitted
only on that particular container, eg

    rclone ls azureblob:container

Since the container name is already part of the SAS URL, you can also
leave it empty, eg

    rclone ls azureblob:

However these will not work

    rclone lsd azureblob:
    rclone ls azureblob:othercontainer

This would be useful for temporarily allowing third parties access to
a single container or putting credentials into an untrusted
environment.

### Multipart uploads ###

Rclone supports multipart uploads with Azure Blob storage. Files
bigger than 256MB will be uploaded using chunked upload by default.

The files will be uploaded in parallel in 4MB chunks (by default).
Note that these chunks are buffered in memory and there may be up to
`--transfers` of them being uploaded at once.

Files can't be split into more than 50,000 chunks so, by default, the
largest file that can be uploaded with 4MB chunk size is 195GB.
Above this rclone will double the chunk size until it creates fewer
than 50,000 chunks. By default this will mean a maximum file size of
3.2TB can be uploaded. This can be raised to 5TB using
`--azureblob-chunk-size 100M`.

Note that rclone doesn't commit the block list until the end of the
upload which means that there is a limit of 9.5TB of multipart uploads
in progress as Azure won't allow more than that amount of uncommitted
blocks.
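As a short example of the above, if you need to upload a file larger than the default 3.2TB limit you could pass a bigger chunk size on the command line (the local path and container name here are just placeholders):

    rclone copy --azureblob-chunk-size 100M /path/to/bigfile remote:container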
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs -->
### Standard Options

Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).

#### --azureblob-account

Storage Account Name (leave blank to use SAS URL or Emulator)

- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
- Type: string
- Default: ""

#### --azureblob-key

Storage Account Key (leave blank to use SAS URL or Emulator)

- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
- Type: string
- Default: ""

#### --azureblob-sas-url

SAS URL for container level access only
(leave blank if using account/key or Emulator)

- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""

#### --azureblob-use-emulator

Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)

- Config: use_emulator
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
- Type: bool
- Default: false

### Advanced Options

Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).

#### --azureblob-endpoint

Endpoint for the service
Leave blank normally.

- Config: endpoint
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
- Type: string
- Default: ""

#### --azureblob-upload-cutoff

Cutoff for switching to chunked upload (<= 256MB).

- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 256M

#### --azureblob-chunk-size

Upload chunk size (<= 100MB).

Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.

- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M

#### --azureblob-list-chunk

Size of blob list.

This sets the number of blobs requested in each listing chunk. Default
is the maximum, 5000. "List blobs" requests are permitted 2 minutes
per megabyte to complete. If an operation is taking longer than 2
minutes per megabyte on average, it will time out (
[source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
). This can be used to limit the number of blob items to return, to
avoid the time out.

- Config: list_chunk
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
- Type: int
- Default: 5000

#### --azureblob-access-tier

Access tier of blob: hot, cool or archive.

Archived blobs can be restored by setting access tier to hot or
cool. Leave blank if you intend to use default access tier, which is
set at account level

If there is no "access tier" specified, rclone doesn't apply any tier.
rclone performs "Set Tier" operation on blobs while uploading, if objects
are not modified, specifying "access tier" to new one will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".

- Config: access_tier
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
- Default: ""

#### --azureblob-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8

<!--- autogenerated options stop -->

### Limitations ###

MD5 sums are only uploaded with chunked files if the source has an MD5
sum. This will always be the case for a local to azure copy.

### Azure Storage Emulator Support ###

You can test rclone with the storage emulator locally. To do this make
sure the Azure storage emulator is installed locally, then set up a new
remote with `rclone config` following the instructions in the
introduction and set the `use_emulator` config option to `true`. You do
not need to provide a default account name or key if using the
emulator.
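As a minimal sketch, a config file entry for an emulator remote could look something like this (the remote name `emulator` is arbitrary, and the emulator is assumed to be running on its default local endpoint):

```
[emulator]
type = azureblob
use_emulator = true
```

You can then exercise it in the usual way, eg `rclone mkdir emulator:test` followed by `rclone lsd emulator:`.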