---
title: "Google Cloud Storage"
description: "Rclone docs for Google Cloud Storage"
---

{{< icon "fab fa-google" >}} Google Cloud Storage
-------------------------------------------------

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command).  You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

The initial setup for Google Cloud Storage involves getting a token from Google Cloud Storage
which you need to do in your browser.  `rclone config` walks you
through it.

Here is an example of how to make a remote called `remote`.  First run:

    rclone config

This will guide you through an interactive setup process:

```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
[snip]
Storage> google cloud storage
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Object owner gets OWNER access, and project team owners get OWNER access.
   \ "bucketOwnerFullControl"
 3 / Object owner gets OWNER access, and project team owners get READER access.
   \ "bucketOwnerRead"
 4 / Object owner gets OWNER access [default if left blank].
   \ "private"
 5 / Object owner gets OWNER access, and project team members get access according to their roles.
   \ "projectPrivate"
 6 / Object owner gets OWNER access, and all Users get READER access.
   \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
   \ "authenticatedRead"
 2 / Project team owners get OWNER access [default if left blank].
   \ "private"
 3 / Project team members get access according to their roles.
   \ "projectPrivate"
 4 / Project team owners get OWNER access, and all Users get READER access.
   \ "publicRead"
 5 / Project team owners get OWNER access, and all Users get WRITER access.
   \ "publicReadWrite"
bucket_acl> 2
Location for the newly created buckets.
Choose a number from below, or type in your own value
 1 / Empty for default location (US).
   \ ""
 2 / Multi-regional location for Asia.
   \ "asia"
 3 / Multi-regional location for Europe.
   \ "eu"
 4 / Multi-regional location for United States.
   \ "us"
 5 / Taiwan.
   \ "asia-east1"
 6 / Tokyo.
   \ "asia-northeast1"
 7 / Singapore.
   \ "asia-southeast1"
 8 / Sydney.
   \ "australia-southeast1"
 9 / Belgium.
   \ "europe-west1"
10 / London.
   \ "europe-west2"
11 / Iowa.
   \ "us-central1"
12 / South Carolina.
   \ "us-east1"
13 / Northern Virginia.
   \ "us-east4"
14 / Oregon.
   \ "us-west1"
location> 12
The storage class to use when storing objects in Google Cloud Storage.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Multi-regional storage class
   \ "MULTI_REGIONAL"
 3 / Regional storage class
   \ "REGIONAL"
 4 / Nearline storage class
   \ "NEARLINE"
 5 / Coldline storage class
   \ "COLDLINE"
 6 / Durable reduced availability storage class
   \ "DURABLE_REDUCED_AVAILABILITY"
storage_class> 5
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = google cloud storage
client_id =
client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
bucket_acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode. This only
runs from the moment it opens your browser to the moment you get back
the verification code.  This is on `http://127.0.0.1:53682/` and it
may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.
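
If you choose `n` at the "Use auto config?" prompt because the machine
has no browser, rclone will instead ask you to run `rclone authorize`
on a machine that does have one and paste the result back in.  A rough
sketch of that flow:

    # on a machine with a web browser
    rclone authorize "google cloud storage"
    # then paste the token it prints into the waiting config prompt
    # on the headless machine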

This remote is called `remote` and can now be used like this

See all the buckets in your project

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket

### Service Account support ###

You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.

To get credentials for Google Cloud Platform
[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
please head to the
[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console. Service Accounts behave just
like normal `User` permissions in
[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
so you can limit their access (e.g. make them read only). After
creating an account, a JSON file containing the Service Account's
credentials will be downloaded onto your machine. These credentials
are what rclone will use for authentication.

To use a Service Account instead of OAuth2 token flow, enter the path
to your Service Account credentials at the `service_account_file`
prompt and rclone won't use the browser-based authentication
flow. If you'd rather put the contents of the credentials file into
the rclone config file, you can set `service_account_credentials` with
the actual contents of the file instead, or set the equivalent
environment variable.
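
As an illustration, a remote set up with a service account might end
up with a config section along these lines (the remote name and file
path here are only examples):

    [gcs-sa]
    type = google cloud storage
    project_number = 12345678
    service_account_file = /home/user/sa-credentials.json
    object_acl = private
    bucket_acl = private

With a section like that in place, `rclone lsd gcs-sa:` authenticates
with the service account rather than opening a browser.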

### Application Default Credentials ###

If no other source of credentials is provided, rclone will fall back
to
[Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials).
This is useful both when you have already configured authentication
for your developer account, and in production when running on a Google
Compute Engine host. Note that if running in Docker, you may need to run
additional commands on your Google Compute Engine instance -
[see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper).

Note that when application default credentials are used, there
is no need to explicitly configure a project number.
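
For example, on a machine where you have already run `gcloud auth
application-default login` (or where the `GOOGLE_APPLICATION_CREDENTIALS`
environment variable points at a service account key file), a remote
configured with blank credentials can be used straight away.  A minimal
sketch, with `remote` standing in for such a remote:

    gcloud auth application-default login
    rclone lsd remote: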

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
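
For example, the sync from the earlier section could be run with fewer
listing transactions like this:

    rclone sync --fast-list /home/local/directory remote:bucket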

### Custom upload headers ###

You can set custom upload headers with the `--header-upload`
flag. Google Cloud Storage supports the headers as described in the
[working with metadata documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata):

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-

Eg `--header-upload "Content-Type: text/potato"`

Note that the last of these is for setting custom metadata in the form
`--header-upload "x-goog-meta-key: value"`
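
For example, to set a cache policy and a custom metadata key on the
objects being uploaded (the header values here are purely
illustrative):

    rclone copy /home/local/directory remote:bucket \
        --header-upload "Cache-Control: max-age=3600" \
        --header-upload "X-Goog-Meta-Source: rclone"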

### Modified time ###

Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
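
For example, an object uploaded by rclone carries a metadata entry
along these lines (the timestamp is only illustrative):

    mtime: 2020-06-26T15:04:05.123456789Z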

#### Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| LF        | 0x0A  | ␊           |
| CR        | 0x0D  | ␍           |
| /         | 0x2F  | ／          |

Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

#### --gcs-client-id

Google Application Client Id
Leave blank normally.

- Config:      client_id
- Env Var:     RCLONE_GCS_CLIENT_ID
- Type:        string
- Default:     ""

#### --gcs-client-secret

Google Application Client Secret
Leave blank normally.

- Config:      client_secret
- Env Var:     RCLONE_GCS_CLIENT_SECRET
- Type:        string
- Default:     ""

#### --gcs-project-number

Project number.
Optional - needed only for list/create/delete buckets - see your developer console.

- Config:      project_number
- Env Var:     RCLONE_GCS_PROJECT_NUMBER
- Type:        string
- Default:     ""

#### --gcs-service-account-file

Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want use SA instead of interactive login.

- Config:      service_account_file
- Env Var:     RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type:        string
- Default:     ""

#### --gcs-service-account-credentials

Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want use SA instead of interactive login.

- Config:      service_account_credentials
- Env Var:     RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- Type:        string
- Default:     ""

#### --gcs-object-acl

Access Control List for new objects.

- Config:      object_acl
- Env Var:     RCLONE_GCS_OBJECT_ACL
- Type:        string
- Default:     ""
- Examples:
    - "authenticatedRead"
        - Object owner gets OWNER access, and all Authenticated Users get READER access.
    - "bucketOwnerFullControl"
        - Object owner gets OWNER access, and project team owners get OWNER access.
    - "bucketOwnerRead"
        - Object owner gets OWNER access, and project team owners get READER access.
    - "private"
        - Object owner gets OWNER access [default if left blank].
    - "projectPrivate"
        - Object owner gets OWNER access, and project team members get access according to their roles.
    - "publicRead"
        - Object owner gets OWNER access, and all Users get READER access.

#### --gcs-bucket-acl

Access Control List for new buckets.

- Config:      bucket_acl
- Env Var:     RCLONE_GCS_BUCKET_ACL
- Type:        string
- Default:     ""
- Examples:
    - "authenticatedRead"
        - Project team owners get OWNER access, and all Authenticated Users get READER access.
    - "private"
        - Project team owners get OWNER access [default if left blank].
    - "projectPrivate"
        - Project team members get access according to their roles.
    - "publicRead"
        - Project team owners get OWNER access, and all Users get READER access.
    - "publicReadWrite"
        - Project team owners get OWNER access, and all Users get WRITER access.

#### --gcs-bucket-policy-only

Access checks should use bucket-level IAM policies.

If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.

When it is set, rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set

Docs: https://cloud.google.com/storage/docs/bucket-policy-only


- Config:      bucket_policy_only
- Env Var:     RCLONE_GCS_BUCKET_POLICY_ONLY
- Type:        bool
- Default:     false

#### --gcs-location

Location for the newly created buckets.

- Config:      location
- Env Var:     RCLONE_GCS_LOCATION
- Type:        string
- Default:     ""
- Examples:
    - ""
        - Empty for default location (US).
    - "asia"
        - Multi-regional location for Asia.
    - "eu"
        - Multi-regional location for Europe.
    - "us"
        - Multi-regional location for United States.
    - "asia-east1"
        - Taiwan.
    - "asia-east2"
        - Hong Kong.
    - "asia-northeast1"
        - Tokyo.
    - "asia-south1"
        - Mumbai.
    - "asia-southeast1"
        - Singapore.
    - "australia-southeast1"
        - Sydney.
    - "europe-north1"
        - Finland.
    - "europe-west1"
        - Belgium.
    - "europe-west2"
        - London.
    - "europe-west3"
        - Frankfurt.
    - "europe-west4"
        - Netherlands.
    - "us-central1"
        - Iowa.
    - "us-east1"
        - South Carolina.
    - "us-east4"
        - Northern Virginia.
    - "us-west1"
        - Oregon.
    - "us-west2"
        - California.

#### --gcs-storage-class

The storage class to use when storing objects in Google Cloud Storage.

- Config:      storage_class
- Env Var:     RCLONE_GCS_STORAGE_CLASS
- Type:        string
- Default:     ""
- Examples:
    - ""
        - Default
    - "MULTI_REGIONAL"
        - Multi-regional storage class
    - "REGIONAL"
        - Regional storage class
    - "NEARLINE"
        - Nearline storage class
    - "COLDLINE"
        - Coldline storage class
    - "ARCHIVE"
        - Archive storage class
    - "DURABLE_REDUCED_AVAILABILITY"
        - Durable reduced availability storage class

### Advanced Options

Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

#### --gcs-encoding

This sets the encoding for the backend.

See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config:      encoding
- Env Var:     RCLONE_GCS_ENCODING
- Type:        MultiEncoder
- Default:     Slash,CrLf,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}