
---
title: "Google Cloud Storage"
description: "Rclone docs for Google Cloud Storage"
versionIntroduced: "v1.02"
---

# {{< icon "fab fa-google" >}} Google Cloud Storage

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

## Configuration

The initial setup for Google Cloud Storage involves getting a token from
Google Cloud Storage which you need to do in your browser. `rclone config`
walks you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
[snip]
Storage> google cloud storage
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Object owner gets OWNER access, and project team owners get OWNER access.
   \ "bucketOwnerFullControl"
 3 / Object owner gets OWNER access, and project team owners get READER access.
   \ "bucketOwnerRead"
 4 / Object owner gets OWNER access [default if left blank].
   \ "private"
 5 / Object owner gets OWNER access, and project team members get access according to their roles.
   \ "projectPrivate"
 6 / Object owner gets OWNER access, and all Users get READER access.
   \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Project team owners get OWNER access [default if left blank].
   \ "private"
 3 / Project team members get access according to their roles.
   \ "projectPrivate"
 4 / Project team owners get OWNER access, and all Users get READER access.
   \ "publicRead"
 5 / Project team owners get OWNER access, and all Users get WRITER access.
   \ "publicReadWrite"
bucket_acl> 2
Location for the newly created buckets.
Choose a number from below, or type in your own value
 1 / Empty for default location (US).
   \ ""
 2 / Multi-regional location for Asia.
   \ "asia"
 3 / Multi-regional location for Europe.
   \ "eu"
 4 / Multi-regional location for United States.
   \ "us"
 5 / Taiwan.
   \ "asia-east1"
 6 / Tokyo.
   \ "asia-northeast1"
 7 / Singapore.
   \ "asia-southeast1"
 8 / Sydney.
   \ "australia-southeast1"
 9 / Belgium.
   \ "europe-west1"
10 / London.
   \ "europe-west2"
11 / Iowa.
   \ "us-central1"
12 / South Carolina.
   \ "us-east1"
13 / Northern Virginia.
   \ "us-east4"
14 / Oregon.
   \ "us-west1"
location> 12
The storage class to use when storing objects in Google Cloud Storage.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Multi-regional storage class
   \ "MULTI_REGIONAL"
 3 / Regional storage class
   \ "REGIONAL"
 4 / Nearline storage class
   \ "NEARLINE"
 5 / Coldline storage class
   \ "COLDLINE"
 6 / Durable reduced availability storage class
   \ "DURABLE_REDUCED_AVAILABILITY"
storage_class> 5
Remote config
Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = google cloud storage
client_id =
client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
bucket_acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if using the web browser to automatically
authenticate. This only runs from the moment it opens your browser to
the moment you get back the verification code. This is on
`http://127.0.0.1:53682/` and it may require you to unblock it
temporarily if you are running a host firewall, or use manual mode.

This remote is called `remote` and can now be used like this:

See all the buckets in your project

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync --interactive /home/local/directory remote:bucket

### Service Account support

You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.

To get credentials for Google Cloud Platform
[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
please head to the
[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console. Service Accounts behave just
like normal `User` permissions in
[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
so you can limit their access (e.g. make them read only). After
creating an account, a JSON file containing the Service Account's
credentials will be downloaded onto your machines. These credentials
are what rclone will use for authentication.

To use a Service Account instead of the OAuth2 token flow, enter the path
to your Service Account credentials at the `service_account_file`
prompt and rclone won't use the browser-based authentication
flow. If you'd rather stuff the contents of the credentials file into
the rclone config file, you can set `service_account_credentials` with
the actual contents of the file instead, or set the equivalent
environment variable.

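For unattended use, the resulting remote can also be written straight
into the rclone config file. A minimal sketch (the remote name,
project number, and credentials path are hypothetical):

```ini
[gcs-sa]
type = google cloud storage
project_number = 12345678
service_account_file = /path/to/sa-credentials.json
```
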
### Anonymous Access

For downloads of objects that permit public access you can configure rclone
to use anonymous access by setting `anonymous` to `true`.
With unauthenticated access you can't write or create files, only read and
list those buckets and objects that have public read access.

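To illustrate why anonymous access is read-only in practice: an object
with public read access is reachable as a plain HTTPS URL, with no signed
request involved. A sketch (the bucket and object names are made up):

```python
from urllib.parse import quote


def public_object_url(bucket: str, obj: str) -> str:
    """Build the unauthenticated download URL for a publicly readable object."""
    # Objects with public read access can be fetched without any credentials;
    # path slashes in the object name are preserved.
    return f"https://storage.googleapis.com/{quote(bucket)}/{quote(obj)}"


print(public_object_url("my-public-bucket", "path/to/file.txt"))
# → https://storage.googleapis.com/my-public-bucket/path/to/file.txt
```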
### Application Default Credentials

If no other source of credentials is provided, rclone will fall back to
[Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials).
This is useful both when you have already configured authentication for
your developer account, and in production when running on a Google Compute
host. Note that if running in Docker, you may need to run additional
commands on your Google Compute machine -
[see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper).

Note that when application default credentials are used, there is no
need to explicitly configure a project number.

### --fast-list

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.

### Custom upload headers

You can set custom upload headers with the `--header-upload`
flag. Google Cloud Storage supports the headers as described in the
[working with metadata documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata):

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Storage-Class
- X-Goog-Meta-

E.g. `--header-upload "Content-Type: text/potato"`

Note that the last of these is for setting custom metadata in the form
`--header-upload "x-goog-meta-key: value"`.

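As a sketch of the distinction drawn above between standard headers and
`x-goog-meta-` custom metadata, here is how a `Key: value` header string
splits (illustrative only, not rclone's actual parsing code):

```python
def classify_header(header: str) -> tuple[str, str, str]:
    """Split a "Key: value" upload header and note whether it is custom metadata."""
    key, _, value = header.partition(":")
    key, value = key.strip(), value.strip()
    # Keys under the X-Goog-Meta- prefix become per-object custom metadata;
    # everything else is sent as a regular object header.
    kind = "custom-metadata" if key.lower().startswith("x-goog-meta-") else "standard"
    return key, value, kind


print(classify_header("Content-Type: text/potato"))
# → ('Content-Type', 'text/potato', 'standard')
print(classify_header("x-goog-meta-key: value"))
# → ('x-goog-meta-key', 'value', 'custom-metadata')
```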
### Modification times

Google Cloud Storage stores md5sum natively.
Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
with one-second precision as `goog-reserved-file-mtime` in file metadata.

To ensure compatibility with gsutil, rclone stores modification time in two separate metadata entries.
`mtime` uses RFC3339 format with one-nanosecond precision.
`goog-reserved-file-mtime` uses Unix timestamp format with one-second precision.
To get modification time from object metadata, rclone reads the metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object updated time.

Note that rclone's default modify window is 1ns.
Files uploaded by gsutil only contain timestamps with one-second precision.
If you use rclone to sync files previously uploaded by gsutil,
rclone will attempt to update modification time for all these files.
To avoid these possibly unnecessary updates, use `--modify-window 1s`.

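The two metadata entries and the read order described above can be
sketched as follows (plain dicts stand in for object metadata; this is an
illustration of the documented behaviour, not rclone's code):

```python
from datetime import datetime, timezone


def metadata_for_upload(mtime: datetime) -> dict:
    """Write both modification-time entries, as described above."""
    return {
        # RFC3339 with nanosecond precision; Python datetimes only carry
        # microseconds, so the last three digits are padded. 'Z' assumes UTC.
        "mtime": mtime.strftime("%Y-%m-%dT%H:%M:%S.%f000Z"),
        # gsutil-compatible Unix timestamp, one-second precision.
        "goog-reserved-file-mtime": str(int(mtime.timestamp())),
    }


def read_mtime(metadata: dict, object_updated: str) -> str:
    """Read modification time in rclone's documented order of preference."""
    return (metadata.get("mtime")
            or metadata.get("goog-reserved-file-mtime")
            or object_updated)


ts = datetime(2024, 5, 17, 10, 3, 46, tzinfo=timezone.utc)
meta = metadata_for_upload(ts)
print(meta["mtime"])
print(read_mtime({}, "2024-05-17T10:03:46Z"))  # falls back to object updated time
```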
### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| LF        | 0x0A  | ␊           |
| CR        | 0x0D  | ␍           |
| /         | 0x2F  | ／          |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

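The table above amounts to a small character translation; a sketch of the
mapping (illustrative, not rclone's implementation):

```python
# Map each restricted character (see the table above) to its
# symbol/fullwidth replacement used in object names.
REPLACEMENTS = str.maketrans({
    "\x00": "\u2400",  # NUL -> ␀
    "\x0a": "\u240a",  # LF  -> ␊
    "\x0d": "\u240d",  # CR  -> ␍
    "/":    "\uff0f",  # /   -> fullwidth solidus ／
})


def encode_name(name: str) -> str:
    """Encode a file name the way the table above describes."""
    return name.translate(REPLACEMENTS)


print(encode_name("a/b\r\n"))
```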
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs" >}}
### Standard options

Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

#### --gcs-client-id

OAuth Client Id.

Leave blank normally.

Properties:

- Config:      client_id
- Env Var:     RCLONE_GCS_CLIENT_ID
- Type:        string
- Required:    false

#### --gcs-client-secret

OAuth Client Secret.

Leave blank normally.

Properties:

- Config:      client_secret
- Env Var:     RCLONE_GCS_CLIENT_SECRET
- Type:        string
- Required:    false

#### --gcs-project-number

Project number.

Optional - needed only for list/create/delete buckets - see your developer console.

Properties:

- Config:      project_number
- Env Var:     RCLONE_GCS_PROJECT_NUMBER
- Type:        string
- Required:    false

#### --gcs-user-project

User project.

Optional - needed only for requester pays.

Properties:

- Config:      user_project
- Env Var:     RCLONE_GCS_USER_PROJECT
- Type:        string
- Required:    false

#### --gcs-service-account-file

Service Account Credentials JSON file path.

Leave blank normally.
Needed only if you want to use SA instead of interactive login.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

Properties:

- Config:      service_account_file
- Env Var:     RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type:        string
- Required:    false

#### --gcs-service-account-credentials

Service Account Credentials JSON blob.

Leave blank normally.
Needed only if you want to use SA instead of interactive login.

Properties:

- Config:      service_account_credentials
- Env Var:     RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- Type:        string
- Required:    false

#### --gcs-anonymous

Access public buckets and objects without credentials.

Set to 'true' if you just want to download files and don't configure credentials.

Properties:

- Config:      anonymous
- Env Var:     RCLONE_GCS_ANONYMOUS
- Type:        bool
- Default:     false

#### --gcs-object-acl

Access Control List for new objects.

Properties:

- Config:      object_acl
- Env Var:     RCLONE_GCS_OBJECT_ACL
- Type:        string
- Required:    false
- Examples:
    - "authenticatedRead"
        - Object owner gets OWNER access.
        - All Authenticated Users get READER access.
    - "bucketOwnerFullControl"
        - Object owner gets OWNER access.
        - Project team owners get OWNER access.
    - "bucketOwnerRead"
        - Object owner gets OWNER access.
        - Project team owners get READER access.
    - "private"
        - Object owner gets OWNER access.
        - Default if left blank.
    - "projectPrivate"
        - Object owner gets OWNER access.
        - Project team members get access according to their roles.
    - "publicRead"
        - Object owner gets OWNER access.
        - All Users get READER access.

#### --gcs-bucket-acl

Access Control List for new buckets.

Properties:

- Config:      bucket_acl
- Env Var:     RCLONE_GCS_BUCKET_ACL
- Type:        string
- Required:    false
- Examples:
    - "authenticatedRead"
        - Project team owners get OWNER access.
        - All Authenticated Users get READER access.
    - "private"
        - Project team owners get OWNER access.
        - Default if left blank.
    - "projectPrivate"
        - Project team members get access according to their roles.
    - "publicRead"
        - Project team owners get OWNER access.
        - All Users get READER access.
    - "publicReadWrite"
        - Project team owners get OWNER access.
        - All Users get WRITER access.

#### --gcs-bucket-policy-only

Access checks should use bucket-level IAM policies.

If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.

When it is set, rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set

Docs: https://cloud.google.com/storage/docs/bucket-policy-only


Properties:

- Config:      bucket_policy_only
- Env Var:     RCLONE_GCS_BUCKET_POLICY_ONLY
- Type:        bool
- Default:     false

#### --gcs-location

Location for the newly created buckets.

Properties:

- Config:      location
- Env Var:     RCLONE_GCS_LOCATION
- Type:        string
- Required:    false
- Examples:
    - ""
        - Empty for default location (US)
    - "asia"
        - Multi-regional location for Asia
    - "eu"
        - Multi-regional location for Europe
    - "us"
        - Multi-regional location for United States
    - "asia-east1"
        - Taiwan
    - "asia-east2"
        - Hong Kong
    - "asia-northeast1"
        - Tokyo
    - "asia-northeast2"
        - Osaka
    - "asia-northeast3"
        - Seoul
    - "asia-south1"
        - Mumbai
    - "asia-south2"
        - Delhi
    - "asia-southeast1"
        - Singapore
    - "asia-southeast2"
        - Jakarta
    - "australia-southeast1"
        - Sydney
    - "australia-southeast2"
        - Melbourne
    - "europe-north1"
        - Finland
    - "europe-west1"
        - Belgium
    - "europe-west2"
        - London
    - "europe-west3"
        - Frankfurt
    - "europe-west4"
        - Netherlands
    - "europe-west6"
        - Zürich
    - "europe-central2"
        - Warsaw
    - "us-central1"
        - Iowa
    - "us-east1"
        - South Carolina
    - "us-east4"
        - Northern Virginia
    - "us-west1"
        - Oregon
    - "us-west2"
        - California
    - "us-west3"
        - Salt Lake City
    - "us-west4"
        - Las Vegas
    - "northamerica-northeast1"
        - Montréal
    - "northamerica-northeast2"
        - Toronto
    - "southamerica-east1"
        - São Paulo
    - "southamerica-west1"
        - Santiago
    - "asia1"
        - Dual region: asia-northeast1 and asia-northeast2.
    - "eur4"
        - Dual region: europe-north1 and europe-west4.
    - "nam4"
        - Dual region: us-central1 and us-east1.

#### --gcs-storage-class

The storage class to use when storing objects in Google Cloud Storage.

Properties:

- Config:      storage_class
- Env Var:     RCLONE_GCS_STORAGE_CLASS
- Type:        string
- Required:    false
- Examples:
    - ""
        - Default
    - "MULTI_REGIONAL"
        - Multi-regional storage class
    - "REGIONAL"
        - Regional storage class
    - "NEARLINE"
        - Nearline storage class
    - "COLDLINE"
        - Coldline storage class
    - "ARCHIVE"
        - Archive storage class
    - "DURABLE_REDUCED_AVAILABILITY"
        - Durable reduced availability storage class

#### --gcs-env-auth

Get GCP IAM credentials from runtime (environment variables or instance metadata if no env vars).

Only applies if service_account_file and service_account_credentials are blank.

Properties:

- Config:      env_auth
- Env Var:     RCLONE_GCS_ENV_AUTH
- Type:        bool
- Default:     false
- Examples:
    - "false"
        - Enter credentials in the next step.
    - "true"
        - Get GCP IAM credentials from the environment (env vars or IAM).

### Advanced options

Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

#### --gcs-token

OAuth Access Token as a JSON blob.

Properties:

- Config:      token
- Env Var:     RCLONE_GCS_TOKEN
- Type:        string
- Required:    false

#### --gcs-auth-url

Auth server URL.

Leave blank to use the provider defaults.

Properties:

- Config:      auth_url
- Env Var:     RCLONE_GCS_AUTH_URL
- Type:        string
- Required:    false

#### --gcs-token-url

Token server URL.

Leave blank to use the provider defaults.

Properties:

- Config:      token_url
- Env Var:     RCLONE_GCS_TOKEN_URL
- Type:        string
- Required:    false

#### --gcs-directory-markers

Upload an empty object with a trailing slash when a new directory is created.

Empty folders are unsupported for bucket-based remotes; this option creates an
empty object ending with "/" to persist the folder.


Properties:

- Config:      directory_markers
- Env Var:     RCLONE_GCS_DIRECTORY_MARKERS
- Type:        bool
- Default:     false

#### --gcs-no-check-bucket

If set, don't attempt to check the bucket exists or create it.

This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.


Properties:

- Config:      no_check_bucket
- Env Var:     RCLONE_GCS_NO_CHECK_BUCKET
- Type:        bool
- Default:     false

#### --gcs-decompress

If set this will decompress gzip encoded objects.

It is possible to upload objects to GCS with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.

If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.


Properties:

- Config:      decompress
- Env Var:     RCLONE_GCS_DECOMPRESS
- Type:        bool
- Default:     false

#### --gcs-endpoint

Endpoint for the service.

Leave blank normally.

Properties:

- Config:      endpoint
- Env Var:     RCLONE_GCS_ENDPOINT
- Type:        string
- Required:    false

#### --gcs-encoding

The encoding for the backend.

See the [encoding section in the overview](/overview/#encoding) for more info.

Properties:

- Config:      encoding
- Env Var:     RCLONE_GCS_ENCODING
- Type:        Encoding
- Default:     Slash,CrLf,InvalidUtf8,Dot

#### --gcs-description

Description of the remote.

Properties:

- Config:      description
- Env Var:     RCLONE_GCS_DESCRIPTION
- Type:        string
- Required:    false

{{< rem autogenerated options stop >}}

## Limitations

`rclone about` is not supported by the Google Cloud Storage backend. Backends
without this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/).