---
layout: post
title: S3COMPAT
permalink: /docs/s3compat
redirect_from:
 - /s3compat.md/
 - /docs/s3compat.md/
---

AIS supports Amazon S3 in three distinct ways:

1. On the back end, via the [backend](providers.md) abstraction. Specifically for the S3 [backend](providers.md), the implementation currently utilizes the [AWS SDK for Go v2](https://github.com/aws/aws-sdk-go-v2).
2. On the client-facing front, AIS provides an S3-compatible API, so that existing S3 applications can use AIStore out of the box without changing their code.
3. Similar to option 2, but instead of instantiating, signing, and issuing requests to S3, AIS executes already-signed S3 requests ([presigned URLs](https://docs.aws.amazon.com/search/doc-search.html?searchPath=documentation-guide&searchQuery=presigned&this_doc_product=Amazon%20Simple%20Storage%20Service&this_doc_guide=User%20Guide)). Elsewhere in the documentation and the source, we refer to this mechanism as a _pass-through_.

This document covers options 2 and 3 - AIS providing an S3-compatible API to clients and apps.

There's a separate, albeit closely related, [document](/docs/s3cmd.md) that explains how to configure `s3cmd` and, if needed, tweak AIStore configuration to work with it:

* [Getting Started with `s3cmd`](/docs/s3cmd.md) - also contains configuration, tips, usage examples, and more.

For additional background, see:

* [High-level AIS block diagram](overview.md#at-a-glance) that shows frontend and backend APIs and capabilities.
* [Setting custom S3 endpoint](/docs/cli/bucket.md) can come in handy when a bucket is hosted by an S3-compliant backend (e.g., MinIO).

## Table of Contents

- [Quick example using `aws` CLI](#quick-example-using-aws-cli)
  - [PUT(object)](#putobject)
  - [GET(object)](#getobject)
  - [HEAD(object)](#headobject)
- [Presigned S3 requests](#presigned-s3-requests)
- [Quick example using Internet Browser](#quick-example-using-internet-browser)
- [`s3cmd` command line](#s3cmd-command-line)
- [ETag and MD5](#etag-and-md5)
- [Last Modification Time](#last-modification-time)
- [Multipart Upload using `aws`](#multipart-upload-using-aws)
- [More Usage Examples](#more-usage-examples)
  - [Create bucket](#create-bucket)
  - [Remove bucket](#remove-bucket)
  - [Upload large object](#upload-large-object)
- [TensorFlow Demo](#tensorflow-demo)
- [S3 Compatibility](#s3-compatibility)
  - [Supported S3](#supported-s3)
  - [Unsupported S3](#unsupported-s3)
- [Boto3 Compatibility](#boto3-compatibility)
- [Amazon CLI tools](#amazon-cli-tools)

## Quick example using `aws` CLI

The following was tested with an _older_ version of `aws` CLI, namely:

```console
$ aws --version
aws-cli/1.15.58 Python/3.5.2 Linux/5.4.0-124-generic botocore/1.10.57
```

You can create buckets and execute PUT/GET verbs, etc.

```console
$ aws --endpoint-url http://localhost:8080/s3 s3 mb s3://abc
make_bucket: abc
```

### PUT(object)

```console
# PUT using AIS CLI:
$ ais put README.md ais://abc

# The same via `aws` CLI:
$ aws --endpoint-url http://localhost:8080/s3 s3api put-object --bucket abc --key LICENSE --body LICENSE
$ ais ls ais://abc
NAME             SIZE
LICENSE          1.05KiB
README.md        10.44KiB
```

### GET(object)

```console
# GET using `aws` CLI:
$ aws --endpoint-url http://localhost:8080/s3 s3api get-object --bucket abc --key README.md /tmp/readme
{
    "ContentType": "text/plain; charset=utf-8",
    "Metadata": {},
    "ContentLength": 10689
}
$ diff -uN README.md /tmp/readme
```

### HEAD(object)

```console
# Get object metadata using `aws` CLI:
$ aws s3api --endpoint-url http://localhost:8080/s3 head-object --bucket abc --key LICENSE
{
    "Metadata": {},
    "ContentLength": 1075,
    "ETag": "f70a21a0c5fa26a93820b0bef5be7619",
    "LastModified": "Mon, 19 Dec 2022 22:23:05 GMT"
}
```

## Presigned S3 requests

AIStore also supports (passing through) [presigned S3 requests](https://docs.aws.amazon.com/search/doc-search.html?searchPath=documentation-guide&searchQuery=presigned&this_doc_product=Amazon%20Simple%20Storage%20Service&this_doc_guide=User%20Guide).

To use this _feature_, you need to enable it first:

```commandline
$ ais config cluster features S3-Presigned-Request
```

Once the cluster is configured, we can prepare and issue a presigned S3 request:
1. First, create a signed S3 request.
   ```commandline
   $ aws s3 presign s3://bucket/test.txt
   https://bucket.s3.us-west-2.amazonaws.com/test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAEXAMPLE123456789%2F20210621%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210621T041609Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=EXAMBLE1234494d5fba3fed607f98018e1dfc62e2529ae96d844123456
   ```

2. Issue the request against AIStore (note the quotes around the URL - its query string contains `&`):
   ```commandline
   $ curl -L -X PUT -d 'testing 1 2 3' 'https://localhost:8080/s3/bucket/test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAEXAMPLE123456789%2F20210621%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210621T041609Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=EXAMBLE1234494d5fba3fed607f98018e1dfc62e2529ae96d844123456'
   ```
   At this point, AIStore will send the presigned (PUT) URL to S3 and, if successful, store the object in the cluster.

3. Check the status of the object:
   ```commandline
   $ ais bucket ls s3://bucket
   NAME          SIZE   CACHED  STATUS
   test.txt      13B    yes     ok
   ```

It is also possible to achieve the same using a Go client. You will need to define a custom `RoundTripper` that rewrites the URL from S3 to AIStore, e.g.:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// customTransport rewrites virtual-hosted-style S3 URLs
// (bucket.s3.<region>.amazonaws.com/key) to AIStore's /s3 endpoint.
type customTransport struct {
	rt http.RoundTripper
}

func (t *customTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	bucket := strings.Split(req.URL.Host, ".")[0]
	req.URL.Scheme = "http"         // assuming a plain-HTTP AIS gateway
	req.URL.Host = "localhost:8080" // <--- CHANGE THIS.
	req.URL.Path = "/s3/" + bucket + req.URL.Path
	return t.rt.RoundTrip(req)
}

func main() {
	customClient := &http.Client{Transport: &customTransport{rt: http.DefaultTransport}}
	// Set Region (and Credentials) exactly as you would for a direct S3 client.
	s3Client := s3.New(s3.Options{HTTPClient: customClient, Region: "us-west-2"})
	getOutput, err := s3Client.GetObject(context.Background(), &s3.GetObjectInput{
		Bucket: aws.String("bucket"),
		Key:    aws.String("test.txt"),
	})
	if err != nil {
		panic(err)
	}
	defer getOutput.Body.Close()
	n, _ := io.Copy(io.Discard, getOutput.Body)
	fmt.Printf("read %d bytes\n", n)
}
```
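
In other words, the transport above converts S3's virtual-hosted-style URL (bucket name in the hostname) into the path-style `/s3/<bucket>/<object>` that AIStore serves.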

## Quick example using Internet Browser

AIStore gateways provide an HTTP/HTTPS interface, which also makes it convenient (and fast) to use your browser to execute `GET`-type queries.

Specifically - since in this document we are talking about the S3-compatible API - here's an example that utilizes the `/s3` endpoint to list all buckets:

```xml
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01">
  <Owner>
     <ID>1</ID>
     <DisplayName>ListAllMyBucketsResult</DisplayName>
  </Owner>
<Buckets>
  <Bucket>
    <Name>bucket-111</Name>
    <CreationDate>2022-08-23T09:16:40-04:00</CreationDate>
    <String>Provider: ais</String>
  </Bucket>
  <Bucket>
    <Name>bucket-222</Name>
    <CreationDate>2022-08-23T13:47:00-04:00</CreationDate>
    <String>Provider: ais</String>
  </Bucket>
  <Bucket>
    <Name>bucket-222</Name>
    <CreationDate>2022-08-23T13:21:21-04:00</CreationDate>
    <String>Provider: aws (WARNING: {bucket-222, Provider: ais} and {bucket-222, Provider: aws} share the same name)</String>
  </Bucket>
  <Bucket>
    <Name>bucket-333</Name>
    <CreationDate>2022-08-23T13:26:32-04:00</CreationDate>
    <String>Provider: gcp</String>
  </Bucket>
</Buckets>
</ListBucketResult>
```

> Notice the "share the same name" warning above. For background, please refer to [backend providers](/docs/providers.md).

> As for the `/s3` endpoint mentioned above, the corresponding request URL in the browser's address bar would look something like `ais-gateway-host:port/s3`.
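
The same listing can be fetched from the command line; for instance, with a local gateway at `localhost:8080` (a placeholder address):

```console
$ curl -s http://localhost:8080/s3
```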

## `s3cmd` command line

The following table enumerates some of the `s3cmd` options that may come in handy:

| Options | Usage | Example |
| --- | --- | --- |
| `--host` | Define an AIS cluster endpoint | `--host=10.10.0.1:51080/s3` |
| `--host-bucket` | Define the URL path to access a bucket of an AIS cluster | `--host-bucket="10.10.0.1:51080/s3/%(bucket)"` |
| `--no-ssl` | Use HTTP instead of HTTPS | |
| `--no-check-certificate` | Disable checking the server's certificate in case of self-signed ones | |
| `--region` | Define a bucket region | `--region=us-west-1` |
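
For example, a bucket listing that combines the options above (with `10.10.0.1:51080` standing in for your AIS gateway):

```console
$ s3cmd ls s3://bck --host=10.10.0.1:51080/s3 --host-bucket="10.10.0.1:51080/s3/%(bucket)" --no-ssl
```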

## ETag and MD5

When you are reading an object from Amazon S3, the response will contain an [ETag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag).

Amazon S3's ETag is the object's checksum, which Amazon computes using `md5`.

On the other hand, the default checksum type that AIS uses is [xxhash](http://cyan4973.github.io/xxHash/).

Therefore, it is advisable to:

1. keep this dichotomy in mind, and
2. possibly, configure the AIS bucket in question with `md5`.

Here's a simple scenario:

Say, an S3-based client performs a GET or a PUT operation and calculates the `md5` of the object being transferred. When the operation finishes, the client compares the checksum with the `ETag` value in the response header. If the checksums differ, the client raises an "MD5 sum mismatch" error.

To enable MD5 checksum at bucket creation time:

```console
$ ais create ais://bck --props="checksum.type=md5"
"ais://bck" bucket created

$ ais show bucket ais://bck | grep checksum
checksum         Type: md5 | Validate: ColdGET
```

Or, you can change the bucket's checksum type at any later time:

```console
$ ais bucket props ais://bck checksum.type=md5
Bucket props successfully updated
"checksum.type" set to:"md5" (was:"xxhash")
```

Please note that changing the bucket's checksum type does not recompute checksums of *existing* objects - only new writes will be checksummed with the newly configured type.
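
To double-check, you can compare a locally computed `md5` with the ETag that AIS's S3 front-end returns - e.g., reusing the `abc` bucket and the `LICENSE` object from the earlier examples (and assuming the bucket is configured with `md5` as shown above):

```console
$ md5sum LICENSE
f70a21a0c5fa26a93820b0bef5be7619  LICENSE
$ aws s3api --endpoint-url http://localhost:8080/s3 head-object --bucket abc --key LICENSE | grep ETag
    "ETag": "f70a21a0c5fa26a93820b0bef5be7619",
```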

## Last Modification Time

AIS tracks an object's last *access* time and returns it as `LastModified` to S3 clients. If an object has never been accessed - which can happen when an AIS bucket uses a Cloud bucket as its backend - zero Unix time is returned.

Example when access time is undefined (not set):

```console
# create AIS bucket with AWS backend bucket (for supported backends and details see docs/providers.md)
$ ais create ais://bck
$ ais bucket props ais://bck backend_bck=aws://bckaws
$ ais bucket props ais://bck checksum.type=md5

# put an object using native ais API and note access time (same as creation time in this case)
$ ais put object.txt ais://bck/obj-ais

# put object with s3cmd - the request bypasses ais, so no access time in the `ls` results
$ s3cmd put object.txt s3://bck/obj-aws --host=localhost:51080 --host-bucket="localhost:51080/s3/%(bucket)"

$ ais ls ais://bck --props checksum,size,atime
NAME            CHECKSUM                                SIZE            ATIME
obj-ais         a103a20a4e8a207fe7ba25eeb2634c96        69.99KiB        08 Dec 20 11:25 PST
obj-aws         a103a20a4e8a207fe7ba25eeb2634c96        69.99KiB

$ s3cmd ls s3://bck --host=localhost:51080 --host-bucket="localhost:51080/s3/%(bucket)"
2020-12-08 11:25     71671   s3://bck/obj-ais
1969-12-31 16:00     71671   s3://bck/obj-aws
```

> See related: the [multipart upload](https://github.com/NVIDIA/aistore/blob/main/ais/test/scripts/s3-mpt-large-files.sh) test and its inline usage comments.

## Multipart Upload using `aws`

The example below reproduces the following [Amazon Knowledge-Center instruction](https://aws.amazon.com/premiumsupport/knowledge-center/s3-multipart-upload-cli/).

> Used `aws-cli/1.15.58 Python/3.5.2 Linux/5.15.0-46-generic botocore/1.10.57`

> Compare with the (user-friendly and easy-to-execute) multipart examples from the [s3cmd companion doc](/docs/s3cmd.md).

But first and separately, we create an `ais://` bucket and configure it with MD5:

```console
$ ais create ais://abc
"ais://abc" created (see https://github.com/NVIDIA/aistore/blob/main/docs/bucket.md#default-bucket-properties)
$ ais bucket props set ais://abc checksum.type=md5
Bucket props successfully updated
"checksum.type" set to: "md5" (was: "xxhash")
```

Next, the uploading sequence:

```console
# 1. initiate multipart upload
$ aws s3api create-multipart-upload --bucket abc --key large-test-file --endpoint-url http://localhost:8080/s3
{
    "Key": "large-test-file",
    "UploadId": "uu3DuXsJG",
    "Bucket": "abc"
}
```

```console
# 2. upload the first part (w/ upload-id copied from the previous command)
$ aws s3api upload-part --endpoint-url http://localhost:8080/s3 --bucket abc --key large-test-file --part-number 1 --body README.md --upload-id uu3DuXsJG
{
    "ETag": "9bc8111718e22a34f9fa6a099da1f3df"
}
```

```console
# 3. upload the second, etc. parts
$ aws s3api upload-part --endpoint-url http://localhost:8080/s3 --bucket abc --key large-test-file --part-number 2 --body LICENSE --upload-id uu3DuXsJG
{
    "ETag": "f70a21a0c5fa26a93820b0bef5be7619"
}
```

```console
# 4. list the active upload by its ID (upload-id)
$ aws s3api list-parts --endpoint-url http://localhost:8080/s3 --bucket abc --key large-test-file --upload-id uu3DuXsJG
{
    "Owner": null,
    "StorageClass": null,
    "Initiator": null,
    "Parts": [
        {
            "PartNumber": 1,
            "ETag": "9bc8111718e22a34f9fa6a099da1f3df",
            "Size": 10725
        },
        {
            "PartNumber": 2,
            "ETag": "f70a21a0c5fa26a93820b0bef5be7619",
            "Size": 1075
        }
    ]
}
```

And finally:

```console
# 5. complete upload, and be done
$ aws s3api complete-multipart-upload --endpoint-url http://localhost:8080/s3 --bucket abc --key large-test-file --multipart-upload file://up.json --upload-id uu3DuXsJG
{
    "Key": "large-test-file",
    "Bucket": "abc",
    "ETag": "799e69a43a00794a86eebffb5fbaf4e6-2"
}
$ s3cmd ls s3://abc
2022-08-31 20:36        11800  s3://abc/large-test-file
```

Notice `file://up.json` in the `complete-multipart-upload` command. It simply contains the "Parts" section(**) copied from the "list active upload" step (above).

> (**) with no sizes
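
For this particular upload, `up.json` would therefore look as follows:

```json
{
  "Parts": [
    {
      "PartNumber": 1,
      "ETag": "9bc8111718e22a34f9fa6a099da1f3df"
    },
    {
      "PartNumber": 2,
      "ETag": "f70a21a0c5fa26a93820b0bef5be7619"
    }
  ]
}
```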

See https://aws.amazon.com/premiumsupport/knowledge-center/s3-multipart-upload-cli for details.


## More Usage Examples

Use any S3 client to access an AIS bucket. The examples below use the standard AWS CLI. To access an AIS bucket, one has to pass the correct `endpoint` to the client. The endpoint is the primary proxy URL plus the `/s3` path, e.g., `http://10.0.0.20:51080/s3`.
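
For instance, to list buckets through a (hypothetical) gateway at `10.0.0.20:51080`:

```shell
$ aws --endpoint-url http://10.0.0.20:51080/s3 s3 ls
```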

### Create bucket

```shell
# check that AIS cluster has no buckets, and create a new one
$ ais ls ais://
AIS Buckets (0)
$ aws --endpoint-url http://localhost:51080/s3 s3 mb s3://bck1
make_bucket: bck1

# list buckets via native CLI
$ ais ls ais://
AIS Buckets (1)
```

### Remove bucket

```shell
$ aws --endpoint-url http://localhost:51080/s3 s3 ls
2020-04-21 16:21:08 bck1

$ aws --endpoint-url http://localhost:51080/s3 s3 rb s3://bck1
remove_bucket: bck1
$ aws --endpoint-url http://localhost:51080/s3 s3 ls
```

### Upload large object

In this section, we use all three clients:

1. `s3cmd` pre-configured to communicate with (and via) AIS
2. `aws` CLI that sends requests directly to the standard AWS S3 endpoint (with no AIS in-between)
3. and, finally, native AIS CLI

```shell
# 1. Upload via `s3cmd` => `aistore`

$ s3cmd put $(which aisnode) s3://abc --multipart-chunk-size-mb=8
upload: 'bin/aisnode' -> 's3://abc/aisnode'  [part 1 of 10, 8MB] [1 of 1]
 8388608 of 8388608   100% in    0s   233.84 MB/s  done
...
 8388608 of 8388608   100% in    0s   234.19 MB/s  done
upload: 'bin/aisnode' -> 's3://abc/aisnode'  [part 10 of 10, 5MB] [1 of 1]
 5975140 of 5975140   100% in    0s   233.39 MB/s  done
```

```shell
# 2. View object metadata via native CLI
$ ais show object s3://abc/aisnode --all
PROPERTY         VALUE
atime            30 Aug 54 17:47 LMT
cached           yes
checksum         md5[a38030ea13e1b59c...]
copies           1 [/tmp/ais/mp3/11]
custom           map[ETag:"e3be082db698af7c15b0502f6a88265d-16" source:aws version:3QEKSH7LowuRB2OnUHjWCFsp58aZpsC2]
ec               -
location         t[MKpt8091]:mp[/tmp/ais/mp3/11, nvme0n1]
name             s3://abc/aisnode
size             77.70MiB
version          3QEKSH7LowuRB2OnUHjWCFsp58aZpsC2
```

```shell
# 3. View object metadata via `aws` CLI => directly to AWS (w/ no aistore in-between):
$ aws s3api head-object --bucket abc --key aisnode
{
    "LastModified": "Tue, 20 Dec 2022 17:43:16 GMT",
    "ContentLength": 81472612,
    "Metadata": {
        "x-amz-meta-ais-cksum-type": "md5",
        "x-amz-meta-ais-cksum-val": "a38030ea13e1b59c529e888426001eed"
    },
    "ETag": "\"e3be082db698af7c15b0502f6a88265d-16\"",
    "AcceptRanges": "bytes",
    "ContentType": "binary/octet-stream",
    "VersionId": "3QEKSH7LowuRB2OnUHjWCFsp58aZpsC2"
}
```

```shell
# 4. Finally, view object metadata via `s3cmd` => `aistore`
$ s3cmd info s3://abc/aisnode
s3://abc/aisnode (object):
   File size: 81472612
   Last mod:  Fri, 30 Aug 1754 22:43:41 GMT
   MIME type: none
   Storage:   STANDARD
   MD5 sum:   a38030ea13e1b59c529e888426001eed
   SSE:       none
   Policy:    none
   CORS:      none
   ACL:       none
```

## TensorFlow Demo

Set the `S3_ENDPOINT` and `S3_USE_HTTPS` environment variables prior to running a TensorFlow job. `S3_ENDPOINT` must be the primary proxy's hostname:port followed by the URL path `/s3` (e.g., `S3_ENDPOINT=10.0.0.20:51080/s3`). Secure HTTP is disabled by default, so `S3_USE_HTTPS` must be `0`.

Example of running a training task:

```console
S3_ENDPOINT=10.0.0.20:51080/s3 S3_USE_HTTPS=0 python mnist.py
```

TensorFlow on AIS training screencast:

![TF training in action](images/ais-s3-tf.gif)

## S3 Compatibility

AIStore fully supports the [Amazon S3 API](https://docs.aws.amazon.com/s3/index.html), with a few exceptions documented and detailed below. The functionality has been tested using native Amazon S3 clients:

* [TensorFlow](https://docs.w3cub.com/tensorflow~guide/deploy/s3)
* [s3cmd](https://github.com/s3tools/s3cmd)
* [aws CLI](https://aws.amazon.com/cli)

Speaking of command-line tools: in addition to its own native [CLI](/docs/cli.md), AIStore also supports the `aws` and `s3cmd` command lines. Python-based S3 clients - which often use the Amazon Web Services (AWS) Software Development Kit for Python called [Boto3](https://github.com/boto/boto3) - are also supported; see the note below on [AIS <=> Boto3 compatibility](#boto3-compatibility).

By way of quick summary, Amazon S3 supports the following API categories:

- Create and delete a bucket
- HEAD bucket
- Get a list of buckets
- PUT, GET, HEAD, and DELETE objects
- Get a list of objects in a bucket (important options include name prefix and page size)
- Copy object within the same bucket or between buckets
- Multi-object deletion
- Get, enable, and disable bucket versioning

and a few more. The following table summarizes S3 APIs and provides the corresponding AIS (native) CLI, as well as [s3cmd](https://github.com/s3tools/s3cmd) and [aws CLI](https://aws.amazon.com/cli) examples (along with comments on limitations, if any).

> See also: a note on [AIS <=> Boto3 compatibility](#boto3-compatibility).

### Supported S3

| API | AIS CLI and comments | [s3cmd](https://github.com/s3tools/s3cmd) | [aws CLI](https://aws.amazon.com/cli) |
| --- | --- | --- | --- |
| Create bucket | `ais create ais://bck` (note: consider using the S3 default `md5` checksum - see [discussion](#etag-and-md5) and examples below) | `s3cmd mb` | `aws s3 mb` |
| Head bucket | `ais bucket show ais://bck` | `s3cmd info s3://bck` | `aws s3api head-bucket` |
| Destroy bucket (aka "remove bucket") | `ais bucket rm ais://bck` | `s3cmd rb` | `aws s3 rb` |
| List buckets | `ais ls ais://` (or, same: `ais ls ais:`) | `s3cmd ls s3://` | `aws s3 ls s3://` |
| PUT object | `ais put filename ais://bck/obj` | `s3cmd put ...` | `aws s3 cp ..` |
| GET object | `ais get ais://bck/obj filename` | `s3cmd get ...` | `aws s3 cp ..` |
| GET object(range) | `ais get ais://bck/obj --offset 0 --length 10` | **Not supported** | `aws s3api get-object --range= ..` |
| HEAD object | `ais object show ais://bck/obj` | `s3cmd info s3://bck/obj` | `aws s3api head-object` |
| List objects in a bucket | `ais ls ais://bck` | `s3cmd ls s3://bucket-name/` | `aws s3 ls s3://bucket-name/` |
| Copy object in a given bucket or between buckets | S3 API is fully supported; we have yet to implement our native CLI to copy objects (we do copy buckets, though) | **Limited support**: `s3cmd` performs GET followed by PUT instead of the AWS API call | `aws s3api copy-object ...` calls the copy-object API |
| Last modification time | AIS always stores only one - the last - version of an object. Therefore, we track creation **and** last access time but not "modification time". | - | - |
| Bucket creation time | `ais bucket show ais://bck` | `s3cmd` displays creation time via its `ls` subcommand: `s3cmd ls s3://` | - |
| Versioning | AIS tracks and updates versioning information but only for the **latest** object version. Versioning is enabled by default; to disable, run: `ais bucket props ais://bck versioning.enabled=false` | - | `aws s3api get/put-bucket-versioning` |
| ACL | Limited support; AIS provides an extensive set of configurable permissions - see `ais bucket props ais://bck access` and `ais auth` and the corresponding documentation | - | - |
| Multipart upload(**) | no dedicated AIS CLI command (support added in v3.12) | `s3cmd put ... s3://bck --multipart-chunk-size-mb=5` | `aws s3api create-multipart-upload --bucket abc ...` |

> (**) With the only exception of the [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) operation.

### Unsupported S3

* Amazon Regions (us-east-1, us-west-1, etc.)
* Retention Policy
* CORS
* Website endpoints
* CloudFront CDN


## Boto3 Compatibility

Arguably, extremely few HTTP client-side libraries do _not_ follow [HTTP redirects](https://www.rfc-editor.org/rfc/rfc7231#page-54) - and Amazon's [botocore](https://github.com/boto/botocore), used by [Boto3](https://github.com/boto/boto3), happens to be one of them.

AIStore provides a shim that alters `botocore`'s (and, therefore, `boto3`'s) behavior to work as expected with AIStore.

To use `boto3` or `botocore` as client libraries for AIStore:

 - Install the [aistore Python package](https://pypi.org/project/aistore) with the `botocore` extra.

```shell
$ pip install aistore[botocore]
```

 - Import `aistore.botocore_patch.botocore` in your source code alongside `botocore` and / or `boto3`.

```python
import boto3
from aistore.botocore_patch import botocore
```
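
With the patch imported, a regular `boto3` client pointed at the AIS `/s3` endpoint works as usual. A minimal sketch (the endpoint and credentials below are placeholders):

```python
import boto3
from aistore.botocore_patch import botocore  # patches botocore to follow redirects

# Point the client at the AIS gateway's /s3 endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8080/s3",
    aws_access_key_id="dummy",      # placeholder credentials
    aws_secret_access_key="dummy",
)

print(s3.list_buckets()["Buckets"])
```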

For more context, see the following `aws-cli` ticket and the discussion therein:

* [Support S3 HTTP redirects to non-Amazon URI's](https://github.com/aws/aws-cli/issues/6559)

## Amazon CLI tools

Among Amazon-compatible CLI tools, `s3cmd` is the preferred and recommended option. Please see the [`s3cmd` readme](/docs/s3cmd.md) for usage examples and a variety of related topics, including:

- [`s3cmd` Configuration](/docs/s3cmd.md#s3cmd-configuration)
- [Getting Started](/docs/s3cmd.md#getting-started)
  - [1. AIS Endpoint](/docs/s3cmd.md#1-ais-endpoint)
  - [2. How to have `s3cmd` calling AIS endpoint](/docs/s3cmd.md#2-how-to-have-s3cmd-calling-ais-endpoint)
  - [3. Alternatively](/docs/s3cmd.md#3-alternatively)
  - [4. Note and, possibly, update AIS configuration](/docs/s3cmd.md#4-note-and-possibly-update-ais-configuration)
  - [5. Create bucket and PUT/GET objects using `s3cmd`](/docs/s3cmd.md#5-create-bucket-and-putget-objects-using-s3cmd)
  - [6. Multipart upload using `s3cmd`](/docs/s3cmd.md#6-multipart-upload-using-s3cmd)