`serve s3` implements a basic s3 server that serves a remote via s3.
This can be viewed with an s3 client, or you can make an [s3 type
remote](/s3/) to read and write to it with rclone.

`serve s3` is considered **Experimental** so use with care.

The S3 server supports Signature Version 4 authentication. Just use
`--auth-key accessKey,secretKey` and set the `Authorization`
header correctly in the request. (See the [AWS
docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).

`--auth-key` can be repeated for multiple auth pairs. If
`--auth-key` is not provided then `serve s3` will allow anonymous
access.

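For example, to accept two different sets of credentials, repeat the
flag (the key values shown are placeholders):

```
rclone serve s3 \
    --auth-key ACCESS_KEY_1,SECRET_KEY_1 \
    --auth-key ACCESS_KEY_2,SECRET_KEY_2 \
    remote:path
```
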
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#ssl-tls) for more information.

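One way to provide an HTTPS endpoint is with rclone's standard TLS
flags for serve commands, for example (assuming you already have a
certificate and key on disk; the paths are placeholders):

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
    --cert /path/to/cert.pem --key /path/to/key.pem remote:path
```
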
This command uses the [VFS directory cache](#vfs-virtual-file-system).
All the functionality will work with `--vfs-cache-mode off`, but
`--vfs-cache-mode full` (or `writes`) can be used to cache objects
locally to improve performance.

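For example, a typical invocation with the write-back cache enabled
might look like this:

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
    --vfs-cache-mode full remote:path
```
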
Use `--force-path-style=false` if you want to use the bucket name as
part of the hostname (such as mybucket.local).

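For example, to accept virtual-hosted style requests such as
`http://mybucket.local:8080/` (assuming your DNS or hosts file resolves
such names to the server):

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
    --force-path-style=false remote:path
```
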
Use `--etag-hash` if you want to change the hash used for the `ETag`.
Note that using anything other than `MD5` (the default) is likely to
cause problems for S3 clients which rely on the `ETag` being the MD5.

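For example, a sketch of serving with SHA-1 based ETags (the exact hash
name the flag accepts may differ on your version, see
`rclone serve s3 --help`, and note the client compatibility caveat
above):

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
    --etag-hash sha1 remote:path
```
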
### Quickstart

For a simple setup, to serve `remote:path` over s3, run the server
like this:

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
```

For example, to use a simple folder in the filesystem, run the server
with a command like this:

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder
```

The `rclone.conf` for the server could look like this:

```
[local]
type = local
```

The `local` configuration is optional, though. If you run the server with a
`remote:path` like `/path/to/folder` (without the `local:` prefix and without
an `rclone.conf` file), rclone will fall back to a default configuration. This
will be visible as a warning in the logs, but the server will run nonetheless.

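For example, the following serves a plain directory with no
`rclone.conf` at all, at the cost of the default-configuration warning
in the logs:

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY /path/to/folder
```
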
This will be compatible with an rclone (client) remote configuration which
is defined like this:

```
[serves3]
type = s3
provider = Rclone
endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```

Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.

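With the server running and the `serves3` remote configured as above,
the client can use it like any other s3 remote, for example
(`mybucket` is an example name corresponding to a directory in the
root of the served path):

```
rclone lsd serves3:
rclone copy /some/local/file.txt serves3:mybucket
```
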
### Bugs

When uploading multipart files `serve s3` holds all the parts in
memory (see [#7453](https://github.com/rclone/rclone/issues/7453)).
This is a limitation of the library rclone uses for serving S3 and will
hopefully be fixed at some point.

Multipart server side copies do not work (see
[#7454](https://github.com/rclone/rclone/issues/7454)). These take a
very long time and eventually fail. The default threshold for
multipart server side copies is 5G, which is the maximum it can be, so
files above this size will fail to be server side copied.

For a current list of `serve s3` bugs see the [serve
s3](https://github.com/rclone/rclone/labels/serve%20s3) bug category
on GitHub.

### Limitations

`serve s3` will treat all directories in the root as buckets and
ignore all files in the root. You can use `CreateBucket` to create
folders under the root, but you can't create empty folders under other
folders not in the root.

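As an illustration, creating a bucket from the client side (here via
the `serves3` remote defined in the quickstart) corresponds to creating
a top-level directory on the served remote; the bucket name is an
example:

```
rclone mkdir serves3:newbucket
```
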
When using `PutObject` or `DeleteObject`, rclone will automatically
create or clean up empty folders. If you don't want to clean up empty
folders automatically, use `--no-cleanup`.

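For example, to keep empty folders on the served remote rather than
removing them when their last object is deleted:

```
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
    --no-cleanup remote:path
```
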
When using `ListObjects`, rclone will use `/` when the delimiter is
empty. This reduces backend requests with no effect on most
operations, but if the delimiter is something other than `/` and
non-empty, rclone will do a full recursive search of the backend, which
can take some time.

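As a sketch with the AWS CLI acting as the s3 client (assuming it is
pointed at the server's endpoint and credentials, and that `mybucket`
exists), a listing with the `/` delimiter stays cheap, while any other
non-empty delimiter forces the recursive search described above:

```
# Cheap: "/" matches how rclone lists the backend
aws --endpoint-url http://127.0.0.1:8080 s3api list-objects-v2 \
    --bucket mybucket --delimiter "/"

# Potentially slow: forces a full recursive search of the backend
aws --endpoint-url http://127.0.0.1:8080 s3api list-objects-v2 \
    --bucket mybucket --delimiter "|"
```
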
Versioning is not currently supported.

Metadata will only be saved in memory, except for the rclone `mtime`
metadata, which will be set as the modification time of the file.

### Supported operations

`serve s3` currently supports the following operations.

- Bucket
    - `ListBuckets`
    - `CreateBucket`
    - `DeleteBucket`
- Object
    - `HeadObject`
    - `ListObjects`
    - `GetObject`
    - `PutObject`
    - `DeleteObject`
    - `DeleteObjects`
    - `CreateMultipartUpload`
    - `CompleteMultipartUpload`
    - `AbortMultipartUpload`
    - `CopyObject`
    - `UploadPart`

Other operations will return error `Unimplemented`.