---
title: "Google Cloud Storage"
description: "Rclone docs for Google Cloud Storage"
date: "2017-07-18"
---

<i class="fa fa-google"></i> Google Cloud Storage
-------------------------------------------------

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

The initial setup for Google Cloud Storage involves getting a token
from Google Cloud Storage, which you need to do in your browser.
`rclone config` walks you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Encrypt/Decrypt a remote
   \ "crypt"
 6 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 7 / Google Drive
   \ "drive"
 8 / Hubic
   \ "hubic"
 9 / Local Disk
   \ "local"
10 / Microsoft OneDrive
   \ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
12 / SSH/SFTP Connection
   \ "sftp"
13 / Yandex Disk
   \ "yandex"
Storage> 6
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Object owner gets OWNER access, and project team owners get OWNER access.
   \ "bucketOwnerFullControl"
 3 / Object owner gets OWNER access, and project team owners get READER access.
   \ "bucketOwnerRead"
 4 / Object owner gets OWNER access [default if left blank].
   \ "private"
 5 / Object owner gets OWNER access, and project team members get access according to their roles.
   \ "projectPrivate"
 6 / Object owner gets OWNER access, and all Users get READER access.
   \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Project team owners get OWNER access [default if left blank].
   \ "private"
 3 / Project team members get access according to their roles.
   \ "projectPrivate"
 4 / Project team owners get OWNER access, and all Users get READER access.
   \ "publicRead"
 5 / Project team owners get OWNER access, and all Users get WRITER access.
   \ "publicReadWrite"
bucket_acl> 2
Location for the newly created buckets.
Choose a number from below, or type in your own value
 1 / Empty for default location (US).
   \ ""
 2 / Multi-regional location for Asia.
   \ "asia"
 3 / Multi-regional location for Europe.
   \ "eu"
 4 / Multi-regional location for United States.
   \ "us"
 5 / Taiwan.
   \ "asia-east1"
 6 / Tokyo.
   \ "asia-northeast1"
 7 / Singapore.
   \ "asia-southeast1"
 8 / Sydney.
   \ "australia-southeast1"
 9 / Belgium.
   \ "europe-west1"
10 / London.
   \ "europe-west2"
11 / Iowa.
   \ "us-central1"
12 / South Carolina.
   \ "us-east1"
13 / Northern Virginia.
   \ "us-east4"
14 / Oregon.
   \ "us-west1"
location> 12
The storage class to use when storing objects in Google Cloud Storage.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Multi-regional storage class
   \ "MULTI_REGIONAL"
 3 / Regional storage class
   \ "REGIONAL"
 4 / Nearline storage class
   \ "NEARLINE"
 5 / Coldline storage class
   \ "COLDLINE"
 6 / Durable reduced availability storage class
   \ "DURABLE_REDUCED_AVAILABILITY"
storage_class> 5
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = google cloud storage
client_id =
client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
bucket_acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and it
may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.
This remote is called `remote` and can now be used like this:

See all the buckets in your project

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket

### Service Account support ###

You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.

To get credentials for Google Cloud Platform
[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
please head to the
[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console. Service Accounts behave just
like normal `User` permissions in
[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
so you can limit their access (e.g. make them read only). After
creating an account, a JSON file containing the Service Account's
credentials will be downloaded onto your machines. These credentials
are what rclone will use for authentication.

To use a Service Account instead of the OAuth2 token flow, enter the
path to your Service Account credentials at the `service_account_file`
prompt and rclone won't use the browser-based authentication
flow. If you'd rather keep the contents of the credentials file in
the rclone config file, you can set `service_account_credentials` to
the actual contents of the file instead, or set the equivalent
environment variable.
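
The service-account setup above can also be written straight into your
`rclone.conf`. A minimal sketch of such an entry (the remote name,
project number, and JSON path are placeholders, not values from this
walkthrough); with `service_account_file` set, no `token` entry is
needed:

```
[gcs-sa]
# Placeholder values - substitute your own project number and key path.
type = google cloud storage
project_number = 12345678
service_account_file = /path/to/service-account-credentials.json
object_acl = private
bucket_acl = private
```

A remote defined like this can then be used exactly as shown above,
e.g. `rclone lsd gcs-sa:`, without any interactive login.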
### Application Default Credentials ###

If no other source of credentials is provided, rclone will fall back to
[Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials).
This is useful both when you have already configured authentication
for your developer account, and in production when running on a Google
Compute host. Note that if running in docker, you may need to run
additional commands on your google compute machine -
[see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper).

Note that when application default credentials are used, there
is no need to explicitly configure a project number.

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.

### Modified time ###

Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs -->
### Standard Options

Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

#### --gcs-client-id

Google Application Client Id
Leave blank normally.

- Config: client_id
- Env Var: RCLONE_GCS_CLIENT_ID
- Type: string
- Default: ""

#### --gcs-client-secret

Google Application Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_GCS_CLIENT_SECRET
- Type: string
- Default: ""

#### --gcs-project-number

Project number.
Optional - needed only for list/create/delete buckets - see your developer console.

- Config: project_number
- Env Var: RCLONE_GCS_PROJECT_NUMBER
- Type: string
- Default: ""

#### --gcs-service-account-file

Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want use SA instead of interactive login.

- Config: service_account_file
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""

#### --gcs-service-account-credentials

Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want use SA instead of interactive login.

- Config: service_account_credentials
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""

#### --gcs-object-acl

Access Control List for new objects.

- Config: object_acl
- Env Var: RCLONE_GCS_OBJECT_ACL
- Type: string
- Default: ""
- Examples:
    - "authenticatedRead"
        - Object owner gets OWNER access, and all Authenticated Users get READER access.
    - "bucketOwnerFullControl"
        - Object owner gets OWNER access, and project team owners get OWNER access.
    - "bucketOwnerRead"
        - Object owner gets OWNER access, and project team owners get READER access.
    - "private"
        - Object owner gets OWNER access [default if left blank].
    - "projectPrivate"
        - Object owner gets OWNER access, and project team members get access according to their roles.
    - "publicRead"
        - Object owner gets OWNER access, and all Users get READER access.

#### --gcs-bucket-acl

Access Control List for new buckets.
- Config: bucket_acl
- Env Var: RCLONE_GCS_BUCKET_ACL
- Type: string
- Default: ""
- Examples:
    - "authenticatedRead"
        - Project team owners get OWNER access, and all Authenticated Users get READER access.
    - "private"
        - Project team owners get OWNER access [default if left blank].
    - "projectPrivate"
        - Project team members get access according to their roles.
    - "publicRead"
        - Project team owners get OWNER access, and all Users get READER access.
    - "publicReadWrite"
        - Project team owners get OWNER access, and all Users get WRITER access.

#### --gcs-bucket-policy-only

Access checks should use bucket-level IAM policies.

If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.

When it is set, rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set

Docs: https://cloud.google.com/storage/docs/bucket-policy-only

- Config: bucket_policy_only
- Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
- Type: bool
- Default: false

#### --gcs-location

Location for the newly created buckets.

- Config: location
- Env Var: RCLONE_GCS_LOCATION
- Type: string
- Default: ""
- Examples:
    - ""
        - Empty for default location (US).
    - "asia"
        - Multi-regional location for Asia.
    - "eu"
        - Multi-regional location for Europe.
    - "us"
        - Multi-regional location for United States.
    - "asia-east1"
        - Taiwan.
    - "asia-east2"
        - Hong Kong.
    - "asia-northeast1"
        - Tokyo.
    - "asia-south1"
        - Mumbai.
    - "asia-southeast1"
        - Singapore.
    - "australia-southeast1"
        - Sydney.
    - "europe-north1"
        - Finland.
    - "europe-west1"
        - Belgium.
    - "europe-west2"
        - London.
    - "europe-west3"
        - Frankfurt.
    - "europe-west4"
        - Netherlands.
    - "us-central1"
        - Iowa.
    - "us-east1"
        - South Carolina.
    - "us-east4"
        - Northern Virginia.
    - "us-west1"
        - Oregon.
    - "us-west2"
        - California.

#### --gcs-storage-class

The storage class to use when storing objects in Google Cloud Storage.

- Config: storage_class
- Env Var: RCLONE_GCS_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "MULTI_REGIONAL"
        - Multi-regional storage class
    - "REGIONAL"
        - Regional storage class
    - "NEARLINE"
        - Nearline storage class
    - "COLDLINE"
        - Coldline storage class
    - "DURABLE_REDUCED_AVAILABILITY"
        - Durable reduced availability storage class

<!--- autogenerated options stop -->
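
As a worked example of the `location` and `storage_class` options,
here is a sketch of an `rclone.conf` entry for a remote whose newly
created buckets live in London on Nearline storage (the remote name
and project number are placeholders):

```
[gcs-nearline]
# Placeholder project number; location and storage_class values are
# taken from the option tables in this document.
type = google cloud storage
project_number = 12345678
location = europe-west2
storage_class = NEARLINE
```

The same options can instead be supplied per invocation with
`--gcs-location` and `--gcs-storage-class`, or via the corresponding
`RCLONE_GCS_LOCATION` and `RCLONE_GCS_STORAGE_CLASS` environment
variables.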