# How to sign and distribute container images using Podman

The motivation behind signing container images is to trust only dedicated image
providers and thereby mitigate man-in-the-middle (MITM) attacks or attacks on
container registries. One way to sign images is to utilize a GNU Privacy Guard
([GPG][0]) key. This technique is generally compatible with any OCI compliant
container registry like [Quay.io][1]. It is worth mentioning that the OpenShift
integrated container registry supports this signing mechanism out of the box,
which makes separate signature storage unnecessary.

[0]: https://gnupg.org
[1]: https://quay.io

From a technical perspective, we can utilize Podman to sign the image before
pushing it into a remote registry. After that, all systems running Podman have
to be configured to retrieve the signatures from a remote server, which can be
any simple web server. Combined with an enforcing signature verification
policy, this means that every unsigned image will be rejected during an image
pull operation. But how does this work?

First of all, we have to create a GPG key pair or select an already locally
available one. To generate a new GPG key, just run `gpg --full-gen-key` and
follow the interactive dialog. Now we should be able to verify that the key
exists locally:

```bash
> gpg --list-keys sgrunert@suse.com
pub   rsa2048 2018-11-26 [SC] [expires: 2020-11-25]
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
uid           [ultimate] Sascha Grunert <sgrunert@suse.com>
sub   rsa2048 2018-11-26 [E] [expires: 2020-11-25]
```
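If an interactive dialog is not practical, for example on a dedicated signing
machine, a key pair can also be created in batch mode. A minimal sketch,
assuming the same identity as above and an empty passphrase for demonstration
purposes only:

```bash
# Non-interactive key generation (demo only: empty passphrase, RSA 2048, valid for 2 years)
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Sascha Grunert <sgrunert@suse.com>" rsa2048 default 2y
```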
Now let’s assume that we run a container registry. For example, we could simply
start one on our local machine:

```bash
sudo podman run -d -p 5000:5000 docker.io/registry
```
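To verify that the registry is up and reachable, we can query the Docker
Registry HTTP API v2, for example with curl; at this point the repository list
is still empty:

```bash
curl http://localhost:5000/v2/_catalog
{"repositories":[]}
```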
The registry does not know anything about image signing; it just provides the
remote storage for the container images. This means if we want to sign an
image, we have to take care of how to distribute the signatures.

Let’s choose a standard `alpine` image for our signing experiment:

```bash
sudo podman pull docker://docker.io/alpine:latest
```

```bash
sudo podman images alpine
REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
docker.io/library/alpine   latest   e7d92cdc71fe   6 weeks ago   5.86 MB
```

Now we can re-tag the image to point it to our local registry:

```bash
sudo podman tag alpine localhost:5000/alpine
```

```bash
sudo podman images alpine
REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
localhost:5000/alpine      latest   e7d92cdc71fe   6 weeks ago   5.86 MB
docker.io/library/alpine   latest   e7d92cdc71fe   6 weeks ago   5.86 MB
```

Podman would now be able to push the image and sign it in one command. But for
this to work, we have to modify our system-wide registries configuration at
`/etc/containers/registries.d/default.yaml`:

```yaml
default-docker:
  sigstore: http://localhost:8000 # Added by us
  sigstore-staging: file:///var/lib/containers/sigstore
```

We can see that we have two signature stores configured:

- `sigstore`: referencing a web server for signature reading
- `sigstore-staging`: referencing a file path for signature writing

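The `default-docker` section applies to all registries. If we only want to
enable the signature stores for our local registry, the same keys can instead
be scoped to a single registry entry in a drop-in file. A minimal sketch,
written via a heredoc (the file name `localhost.yaml` is just an example):

```bash
sudo tee /etc/containers/registries.d/localhost.yaml > /dev/null <<'EOF'
docker:
  localhost:5000:
    sigstore: http://localhost:8000
    sigstore-staging: file:///var/lib/containers/sigstore
EOF
```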
Now, let’s push and sign the image:

```bash
sudo -E GNUPGHOME=$HOME/.gnupg \
    podman push \
    --tls-verify=false \
    --sign-by sgrunert@suse.com \
    localhost:5000/alpine
…
Storing signatures
```

If we now take a look at the system’s signature storage, we see that a new
signature has been created by the image push:

```bash
sudo ls /var/lib/containers/sigstore
'alpine@sha256=e9b65ef660a3ff91d28cc50eba84f21798a6c5c39b4dd165047db49e84ae1fb9'
```
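Signing does not have to happen during the push: an image that already lives in
a registry can also be signed after the fact with `podman image sign`, which
writes the signature into the configured staging store as well. A sketch,
assuming the registry is reachable over TLS or listed as insecure in
`registries.conf`:

```bash
# Sign an already pushed image; the signature ends up in the sigstore-staging path
sudo -E GNUPGHOME=$HOME/.gnupg \
    podman image sign \
    --sign-by sgrunert@suse.com \
    docker://localhost:5000/alpine
```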
The default signature store in our edited version of
`/etc/containers/registries.d/default.yaml` references a web server listening at
`http://localhost:8000`. For our experiment, we simply start a new server inside
the local staging signature store:

```bash
sudo bash -c 'cd /var/lib/containers/sigstore && python3 -m http.server'
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
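We can quickly check that the signature is now served over HTTP, using the
digest from the listing above:

```bash
curl -sI 'http://localhost:8000/alpine@sha256=e9b65ef660a3ff91d28cc50eba84f21798a6c5c39b4dd165047db49e84ae1fb9/signature-1'
HTTP/1.0 200 OK
…
```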
Let’s remove the local images for our verification test:

```bash
sudo podman rmi docker.io/alpine localhost:5000/alpine
```

We have to write a policy to enforce that images must carry a valid signature.
This can be done by adding a new rule in `/etc/containers/policy.json`. From
the below example, copy the `"docker"` entry into the `"transports"` section of
your `policy.json`.

```json
{
  "default": [{ "type": "insecureAcceptAnything" }],
  "transports": {
    "docker": {
      "localhost:5000": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/tmp/key.gpg"
        }
      ]
    }
  }
}
```

The file referenced by `keyPath` does not exist yet, so we have to export the
GPG public key there:

```bash
gpg --output /tmp/key.gpg --armor --export sgrunert@suse.com
```
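Editing `policy.json` by hand is not strictly necessary: the same rule can be
written with `podman image trust`, and the resulting policy can be displayed
with its `show` subcommand. A minimal sketch:

```bash
# Add a signedBy requirement for localhost:5000 pointing at the exported public key
sudo podman image trust set --type signedBy --pubkeysfile /tmp/key.gpg localhost:5000
# Show the currently configured trust policy
sudo podman image trust show
```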
If we now pull the image:

```bash
sudo podman pull --tls-verify=false localhost:5000/alpine
…
Storing signatures
e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a
```

Then we can see in the logs of the web server that the signature has been
accessed:

```
127.0.0.1 - - [04/Mar/2020 11:18:21] "GET /alpine@sha256=e9b65ef660a3ff91d28cc50eba84f21798a6c5c39b4dd165047db49e84ae1fb9/signature-1 HTTP/1.1" 200 -
```

As a counterexample, if we specify the wrong key at `/tmp/key.gpg`:

```bash
gpg --output /tmp/key.gpg --armor --export mail@saschagrunert.de
File '/tmp/key.gpg' exists. Overwrite? (y/N) y
```

Then a pull is no longer possible:

```bash
sudo podman pull --tls-verify=false localhost:5000/alpine
Trying to pull localhost:5000/alpine...
Error: error pulling image "localhost:5000/alpine": unable to pull localhost:5000/alpine: unable to pull image: Source image rejected: Invalid GPG signature: …
```

So in general there are four main things to be taken into consideration when
signing container images with Podman and GPG:

1. We need a valid private GPG key on the signing machine and corresponding
   public keys on every system which would pull the image (see the sketch after
   this list)
2. A web server has to run somewhere which has access to the signature storage
3. The web server has to be configured in any
   `/etc/containers/registries.d/*.yaml` file
4. Every image pulling system has to be configured to contain the enforcing
   policy configuration via `policy.json`

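For the first point, the public key can be exported on the signing machine and
copied to each pulling system so that it matches the `keyPath` in
`policy.json`. A sketch, where the target host is only a placeholder:

```bash
# On the signing machine: export the public key
gpg --output key.gpg --armor --export sgrunert@suse.com
# Copy it to a pulling system (host name is a placeholder)
scp key.gpg user@pulling-system:/tmp/key.gpg
```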
That’s it for image signing and GPG. The cool thing is that this setup works out
of the box with [CRI-O][2] as well and can be used to sign container images in
Kubernetes environments.

[2]: https://cri-o.io