     1  ![PODMAN logo](logo/podman-logo-source.svg)
     2  
     3  # Troubleshooting
     4  
     5  ## A list of common issues and solutions for Podman
     6  
     7  ---
     8  ### 1) Variety of issues - Validate Version
     9  
    10  A large number of issues reported against Podman are often found to already be fixed
    11  in more current versions of the project.  Before reporting an issue, please verify the
    12  version you are running with `podman version` and compare it to the latest release
    13  documented on the top of Podman's [README.md](README.md).
    14  
If they differ, please update your version of Podman to the latest possible
    16  and retry your command before reporting the issue.
    17  
    18  ---
    19  ### 2) Can't use volume mount, get permission denied
    20  
    21  ```console
    22  $ podman run -v ~/mycontent:/content fedora touch /content/file
    23  touch: cannot touch '/content/file': Permission denied
    24  ```
    25  
    26  #### Solution
    27  
    28  This is sometimes caused by SELinux, and sometimes by user namespaces.
    29  
    30  Labeling systems like SELinux require that proper labels are placed on volume
    31  content mounted into a container. Without a label, the security system might
    32  prevent the processes running inside the container from using the content. By
    33  default, Podman does not change the labels set by the OS.
    34  
    35  To change a label in the container context, you can add either of two suffixes
    36  **:z** or **:Z** to the volume mount. These suffixes tell Podman to relabel file
    37  objects on the shared volumes. The **z** option tells Podman that two containers
    38  share the volume content. As a result, Podman labels the content with a shared
    39  content label. Shared volume labels allow all containers to read/write content.
    40  The **Z** option tells Podman to label the content with a private unshared label.
    41  Only the current container can use a private volume.
    42  
    43  ```console
    44  $ podman run -v ~/mycontent:/content:Z fedora touch /content/file
    45  ```
    46  
Make sure the content is private for the container.  Do not relabel system directories and content.
Relabeling system content might cause other confined services on your machine to fail.  For containers
that need access to such content, we recommend disabling SELinux separation for the container with the
`--security-opt label=disable` option.
    51  
    52  ```console
    53  $ podman run --security-opt label=disable -v ~:/home/user fedora touch /home/user/file
    54  ```
    55  
    56  In cases where the container image runs as a specific, non-root user, though, the
    57  solution is to fix the user namespace.  This would include container images such as
    58  the Jupyter Notebook image (which runs as "jovyan") and the Postgres image (which runs
    59  as "postgres").  In either case, use the `--userns` switch to map user namespaces,
    60  most of the time by using the **keep-id** option.
    61  
    62  ```console
    63  $ podman run -v "$PWD":/home/jovyan/work --userns=keep-id jupyter/scipy-notebook
    64  ```
    65  
    66  ---
    67  ### 3) No such image or Bare keys cannot contain ':'
    68  
    69  When doing a `podman pull` or `podman build` command and a "common" image cannot be pulled,
    70  it is likely that the `/etc/containers/registries.conf` file is either not installed or possibly
    71  misconfigured.
    72  
    73  #### Symptom
    74  
    75  ```console
    76  $ sudo podman build -f Dockerfile
    77  STEP 1: FROM alpine
    78  error building: error creating build container: no such image "alpine" in registry: image not known
    79  ```
    80  
    81  or
    82  
    83  ```console
    84  $ sudo podman pull fedora
    85  error pulling image "fedora": unable to pull fedora: error getting default registries to try: Near line 9 (last key parsed ''): Bare keys cannot contain ':'.
    86  ```
    87  
    88  #### Solution
    89  
    90    * Verify that the `/etc/containers/registries.conf` file exists.  If not, verify that the containers-common package is installed.
    91    * Verify that the entries in the `unqualified-search-registries` list of the `/etc/containers/registries.conf` file are valid and reachable.
    92      * i.e. `unqualified-search-registries = ["registry.fedoraproject.org", "quay.io", "registry.access.redhat.com"]`
    93  
    94  ---
    95  ### 4) http: server gave HTTP response to HTTPS client
    96  
    97  When doing a Podman command such as `build`, `commit`, `pull`, or `push` to a registry,
    98  TLS verification is turned on by default.  If encryption is not used with
    99  those commands, this error can occur.
   100  
   101  #### Symptom
   102  
   103  ```console
   104  $ sudo podman push alpine docker://localhost:5000/myalpine:latest
   105  Getting image source signatures
   106  Get https://localhost:5000/v2/: http: server gave HTTP response to HTTPS client
   107  ```
   108  
   109  #### Solution
   110  
By default TLS verification is turned on when communicating with registries from
Podman.  If the registry does not require encryption, Podman commands such as
`build`, `commit`, `pull` and `push` will fail unless TLS verification is turned
off using the `--tls-verify` option.  **NOTE:** Communicating with a registry
without TLS verification is strongly discouraged.
   116  
   117    * Turn off TLS verification by passing false to the tls-verify option.
   118    * I.e. `podman push --tls-verify=false alpine docker://localhost:5000/myalpine:latest`
   119  
   120  ---
   121  ### 5) rootless containers cannot ping hosts
   122  
   123  When using the ping command from a non-root container, the command may
   124  fail because of a lack of privileges.
   125  
   126  #### Symptom
   127  
   128  ```console
   129  $ podman run --rm fedora ping -W10 -c1 redhat.com
   130  PING redhat.com (209.132.183.105): 56 data bytes
   131  
   132  --- redhat.com ping statistics ---
   133  1 packets transmitted, 0 packets received, 100% packet loss
   134  ```
   135  
   136  #### Solution
   137  
   138  It is most likely necessary to enable unprivileged pings on the host.
   139  Be sure the UID of the user is part of the range in the
   140  `/proc/sys/net/ipv4/ping_group_range` file.
   141  
   142  To change its value you can use something like:
   143  
   144  ```console
   145  # sysctl -w "net.ipv4.ping_group_range=0 2000000"
   146  ```
   147  
   148  To make the change persistent, you'll need to add a file in
   149  `/etc/sysctl.d` that contains `net.ipv4.ping_group_range=0 $MAX_UID`.
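
For example (a sketch; the file name is arbitrary, and `2147483647` stands in for your `$MAX_UID`):

```console
# echo "net.ipv4.ping_group_range=0 2147483647" > /etc/sysctl.d/99-ping-group-range.conf
```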
   150  
   151  ---
   152  ### 6) Build hangs when the Dockerfile contains the useradd command
   153  
   154  When the Dockerfile contains a command like `RUN useradd -u 99999000 -g users newuser` the build can hang.
   155  
   156  #### Symptom
   157  
If you are using a useradd command within a Dockerfile with a large UID/GID, it will create a large sparse file `/var/log/lastlog`.  This can cause the build to hang forever.  The Go language does not handle sparse files correctly, which can lead to huge files being created in your container image.
   159  
   160  #### Solution
   161  
If the entry in the Dockerfile looks like `RUN useradd -u 99999000 -g users newuser`, add the `--no-log-init` parameter to change it to `RUN useradd --no-log-init -u 99999000 -g users newuser`. This option tells useradd not to create the lastlog file.
   163  
   164  ### 7) Permission denied when running Podman commands
   165  
When rootless Podman attempts to execute a container from a home directory mounted noexec, a permission error will be raised.
   167  
   168  #### Symptom
   169  
   170  If you are running Podman or Buildah on a home directory that is mounted noexec,
   171  then they will fail with a message like:
   172  
   173  ```console
   174  $ podman run centos:7
   175  standard_init_linux.go:203: exec user process caused "permission denied"
   176  ```
   177  
   178  #### Solution
   179  
Since the administrator of the system set up your home directory to be mounted noexec, you will not be allowed to execute containers from storage in your home directory. It is possible to work around this by manually specifying a container storage path that is not on a noexec mount. Copy the file `/etc/containers/storage.conf` to `~/.config/containers/` (creating the directory if necessary). Specify a `graphroot` directory which is not on a noexec mount point and to which you have read/write privileges.  You will need to modify other fields to writable directories as well.
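
The copy itself can be done with (paths as described above):

```console
$ mkdir -p ~/.config/containers
$ cp /etc/containers/storage.conf ~/.config/containers/storage.conf
```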
   181  
   182  For example
   183  
   184  ```console
   185  $ cat ~/.config/containers/storage.conf
   186  [storage]
   187    driver = "overlay"
   188    runroot = "/run/user/1000"
   189    graphroot = "/execdir/myuser/storage"
   190    [storage.options]
   191      mount_program = "/bin/fuse-overlayfs"
   192  ```
   193  
   194  ### 8) Permission denied when running systemd within a Podman container
   195  
   196  When running systemd as PID 1 inside of a container on an SELinux
   197  separated machine, it needs to write to the cgroup file system.
   198  
   199  #### Symptom
   200  
   201  Systemd gets permission denied when attempting to write to the cgroup file
   202  system, and AVC messages start to show up in the audit.log file or journal on
   203  the system.
   204  
   205  #### Solution
   206  
Newer versions of Podman (2.0 or greater) support running init-based containers
with a different SELinux label, which allows the container process access to the
cgroup file system. This feature requires container-selinux-2.132 or newer
versions.
   211  
   212  Prior to Podman 2.0, the SELinux boolean `container_manage_cgroup` allows
   213  container processes to write to the cgroup file system. Turn on this boolean,
   214  on SELinux separated systems, to allow systemd to run properly in the container.
   215  Only do this on systems running older versions of Podman.
   216  
   217  ```console
   218  # setsebool -P container_manage_cgroup true
   219  ```
   220  
   221  ### 9) Newuidmap missing when running rootless Podman commands
   222  
   223  Rootless Podman requires the newuidmap and newgidmap programs to be installed.
   224  
   225  #### Symptom
   226  
   227  If you are running Podman or Buildah as a rootless user, you get an error complaining about
   228  a missing newuidmap executable.
   229  
   230  ```console
   231  $ podman run -ti fedora sh
   232  command required for rootless mode with multiple IDs: exec: "newuidmap": executable file not found in $PATH
   233  ```
   234  
   235  #### Solution
   236  
   237  Install a version of shadow-utils that includes these executables.  Note that for RHEL and CentOS 7, at least the 7.7 release must be installed for support to be available.
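
On Fedora, RHEL, and CentOS, for example, the executables ship in the shadow-utils package:

```console
# dnf -y install shadow-utils
```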
   238  
   239  ### 10) rootless setup user: invalid argument
   240  
   241  Rootless Podman requires the user running it to have a range of UIDs listed in /etc/subuid and /etc/subgid.
   242  
   243  #### Symptom
   244  
   245  A user, either via --user or through the default configured for the image, is not mapped inside the namespace.
   246  
   247  ```console
   248  $ podman run --rm -ti --user 1000000 alpine echo hi
   249  Error: container create failed: container_linux.go:344: starting container process caused "setup user: invalid argument"
   250  ```
   251  
   252  #### Solution
   253  
Update the `/etc/subuid` and `/etc/subgid` files with entries for the user that look like:
   255  
   256  ```console
   257  $ cat /etc/subuid
   258  johndoe:100000:65536
   259  test:165536:65536
   260  ```
   261  
The format of this file is `USERNAME:UID:RANGE`:

* The username as listed in `/etc/passwd` or returned by `getpwent`.
* The initial UID allocated for the user.
* The size of the range of UIDs allocated for the user.
   267  
   268  This means johndoe is allocated UIDs 100000-165535 as well as his standard UID in the
   269  `/etc/passwd` file.
   270  
You should ensure that each user has a unique range of UIDs, because overlapping UIDs
would potentially allow one user to attack another user. In addition, make sure
   273  that the range of UIDs you allocate can cover all UIDs that the container
   274  requires. For example, if the container has a user with UID 10000, ensure you
   275  have at least 10001 subuids, and if the container needs to be run as a user with
   276  UID 1000000, ensure you have at least 1000001 subuids.
   277  
   278  You could also use the `usermod` program to assign UIDs to a user.
   279  
If you update either the `/etc/subuid` or `/etc/subgid` file, you need to
stop all running containers and kill the pause process.  The `podman system migrate`
command does both automatically.
   284  
   285  ```console
   286  # usermod --add-subuids 200000-201000 --add-subgids 200000-201000 johndoe
   287  # grep johndoe /etc/subuid /etc/subgid
   288  /etc/subuid:johndoe:200000:1001
   289  /etc/subgid:johndoe:200000:1001
   290  ```
   291  
   292  ### 11) Changing the location of the Graphroot leads to permission denied
   293  
   294  When I change the graphroot storage location in storage.conf, the next time I
   295  run Podman, I get an error like:
   296  
   297  ```console
   298  # podman run -p 5000:5000 -it centos bash
   299  
   300  bash: error while loading shared libraries: /lib64/libc.so.6: cannot apply additional memory protection after relocation: Permission denied
   301  ```
   302  
For example, the admin sets up a spare disk to be mounted at `/srv/containers`,
and points storage.conf at this directory.
   305  
   306  
   307  #### Symptom
   308  
   309  SELinux blocks containers from using arbitrary locations for overlay storage.
   310  These directories need to be labeled with the same labels as if the content was
   311  under `/var/lib/containers/storage`.
   312  
   313  #### Solution
   314  
   315  Tell SELinux about the new containers storage by setting up an equivalence record.
   316  This tells SELinux to label content under the new path, as if it was stored
   317  under `/var/lib/containers/storage`.
   318  
   319  ```console
   320  # semanage fcontext -a -e /var/lib/containers /srv/containers
   321  # restorecon -R -v /srv/containers
   322  ```
   323  
The semanage command above tells SELinux to set up the default labeling of
`/srv/containers` to match `/var/lib/containers`.  The `restorecon` command
tells SELinux to apply the labels to the actual content.
   327  
   328  Now all new content created in these directories will automatically be created
   329  with the correct label.
   330  
   331  ### 12) Anonymous image pull fails with 'invalid username/password'
   332  
   333  Pulling an anonymous image that doesn't require authentication can result in an
   334  `invalid username/password` error.
   335  
   336  #### Symptom
   337  
   338  If you pull an anonymous image, one that should not require credentials, you can receive
   339  an `invalid username/password` error if you have credentials established in the
   340  authentication file for the target container registry that are no longer valid.
   341  
   342  ```console
   343  $ podman run -it --rm docker://docker.io/library/alpine:latest ls
   344  Trying to pull docker://docker.io/library/alpine:latest...ERRO[0000] Error pulling image ref //alpine:latest: Error determining manifest MIME type for docker://alpine:latest: unable to retrieve auth token: invalid username/password
   345  Failed
   346  Error: unable to pull docker://docker.io/library/alpine:latest: unable to pull image: Error determining manifest MIME type for docker://alpine:latest: unable to retrieve auth token: invalid username/password
   347  ```
   348  
   349  This can happen if the authentication file is modified 'by hand' or if the credentials
   350  are established locally and then the password is updated later in the container registry.
   351  
   352  #### Solution
   353  
   354  Depending upon which container tool was used to establish the credentials, use `podman logout`
   355  or `docker logout` to remove the credentials from the authentication file.
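
For example, to clear stale credentials for Docker Hub (the registry in the error above):

```console
$ podman logout docker.io
```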
   356  
   357  ### 13) Running Podman inside a container causes container crashes and inconsistent states
   358  
   359  Running Podman in a container and forwarding some, but not all, of the required host directories can cause inconsistent container behavior.
   360  
   361  #### Symptom
   362  
   363  After creating a container with Podman's storage directories mounted in from the host and running Podman inside a container, all containers show their state as "configured" or "created", even if they were running or stopped.
   364  
   365  #### Solution
   366  
   367  When running Podman inside a container, it is recommended to mount at a minimum `/var/lib/containers/storage/` as a volume.
   368  Typically, you will not mount in the host version of the directory, but if you wish to share containers with the host, you can do so.
   369  If you do mount in the host's `/var/lib/containers/storage`, however, you must also mount in the host's `/run/libpod` and `/run/containers/storage` directories.
   370  Not doing this will cause Podman in the container to detect that temporary files have been cleared, leading it to assume a system restart has taken place.
   371  This can cause Podman to reset container states and lose track of running containers.
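
A minimal sketch of sharing the host's storage, with the extra `/run` mounts described above (the `--privileged` flag and image are illustrative):

```console
$ podman run --privileged \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    -v /run/libpod:/run/libpod \
    -v /run/containers/storage:/run/containers/storage \
    quay.io/podman/stable podman ps -a
```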
   372  
   373  For running containers on the host from inside a container, we also recommend the [Podman remote client](docs/tutorials/remote_client.md), which only requires a single socket to be mounted into the container.
   374  
   375  ### 14) Rootless 'podman build' fails EPERM on NFS:
   376  
NFS enforces file ownership on the server side and does not understand user namespaces, which rootless Podman requires.
When a container root process like YUM attempts to create a file owned by a different UID, the NFS server denies its creation.
NFS is also a problem for file locks when the storage is on it.  Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode, as these file systems do not understand user namespaces.
   380  
   381  #### Symptom
   382  ```console
   383  $ podman build .
   384  ERRO[0014] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   385  error creating build container: Error committing the finished image: error adding layer with blob "sha256:a02a4930cb5d36f3290eb84f4bfa30668ef2e9fe3a1fb73ec015fc58b9958b17": ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   386  ```
   387  
   388  #### Solution
   389  Choose one of the following:
   390    * Setup containers/storage in a different directory, not on an NFS share.
   391      * Create a directory on a local file system.
   392      * Edit `~/.config/containers/containers.conf` and point the `volume_path` option to that local directory. (Copy `/usr/share/containers/containers.conf` if `~/.config/containers/containers.conf` does not exist)
   393    * Otherwise just run Podman as root, via `sudo podman`
   394  
   395  ### 15) Rootless 'podman build' fails when using OverlayFS:
   396  
   397  The Overlay file system (OverlayFS) requires the ability to call the `mknod` command when creating whiteout files
   398  when extracting an image.  However, a rootless user does not have the privileges to use `mknod` in this capacity.
   399  
   400  #### Symptom
   401  ```console
   402  $ podman build --storage-driver overlay .
   403  STEP 1: FROM docker.io/ubuntu:xenial
   404  Getting image source signatures
   405  Copying blob edf72af6d627 done
   406  Copying blob 3e4f86211d23 done
   407  Copying blob 8d3eac894db4 done
   408  Copying blob f7277927d38a done
   409  Copying config 5e13f8dd4c done
   410  Writing manifest to image destination
   411  Storing signatures
   412  Error: error creating build container: Error committing the finished image: error adding layer with blob "sha256:8d3eac894db4dc4154377ad28643dfe6625ff0e54bcfa63e0d04921f1a8ef7f8": Error processing tar file(exit status 1): operation not permitted
   413  $ podman build .
   414  ERRO[0014] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   415  error creating build container: Error committing the finished image: error adding layer with blob "sha256:a02a4930cb5d36f3290eb84f4bfa30668ef2e9fe3a1fb73ec015fc58b9958b17": ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   416  ```
   417  
   418  #### Solution
   419  Choose one of the following:
   420    * Complete the build operation as a privileged user.
   421    * Install and configure fuse-overlayfs.
   422      * Install the fuse-overlayfs package for your Linux Distribution.
   423      * Add `mount_program = "/usr/bin/fuse-overlayfs"` under `[storage.options]` in your `~/.config/containers/storage.conf` file.
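
A minimal `~/.config/containers/storage.conf` for this setup might look like (a sketch, assuming the overlay driver and default rootless storage paths):

```console
$ cat ~/.config/containers/storage.conf
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
```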
   424  
   425  ### 16) RHEL 7 and CentOS 7 based `init` images don't work with cgroup v2
   426  
   427  The systemd version shipped in RHEL 7 and CentOS 7 doesn't have support for cgroup v2.  Support for cgroup v2 requires version 230 of systemd or newer, which
   428  was never shipped or supported on RHEL 7 or CentOS 7.
   429  
   430  #### Symptom
   431  ```console
   432  # podman run --name test -d registry.access.redhat.com/rhel7-init:latest && sleep 10 && podman exec test systemctl status
   433  c8567461948439bce72fad3076a91ececfb7b14d469bfa5fbc32c6403185beff
   434  Failed to get D-Bus connection: Operation not permitted
   435  Error: non zero exit code: 1: OCI runtime error
   436  ```
   437  
   438  #### Solution
   439  You'll need to either:
   440  
   441  * configure the host to use cgroup v1. On Fedora you can do:
   442  
   443  ```console
   444  # dnf install -y grubby
# grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
   446  # reboot
   447  ```
   448  
   449  * update the image to use an updated version of systemd.
   450  
   451  ### 17) rootless containers exit once the user session exits
   452  
You need to set lingering mode through loginctl to prevent user processes from
being killed once the user session exits.
   455  
   456  #### Symptom
   457  
Once the user logs out, all the containers exit.
   459  
   460  #### Solution
Enable lingering for the user:
   462  
   463  ```console
   464  # loginctl enable-linger $UID
   465  ```
   466  
   467  ### 18) `podman run` fails with "bpf create: permission denied error"
   468  
   469  The Kernel Lockdown patches deny eBPF programs when Secure Boot is enabled in the BIOS. [Matthew Garrett's post](https://mjg59.dreamwidth.org/50577.html) describes the relationship between Lockdown and Secure Boot and [Jan-Philip Gehrcke's](https://gehrcke.de/2019/09/running-an-ebpf-program-may-require-lifting-the-kernel-lockdown/) connects this with eBPF. [RH bug 1768125](https://bugzilla.redhat.com/show_bug.cgi?id=1768125) contains some additional details.
   470  
   471  #### Symptom
   472  
   473  Attempts to run podman result in
   474  
   475  ```Error: bpf create : Operation not permitted: OCI runtime permission denied error```
   476  
   477  #### Solution
   478  
   479  One workaround is to disable Secure Boot in your BIOS.
   480  
   481  ### 19) error creating libpod runtime: there might not be enough IDs available in the namespace
   482  
Rootless Podman is unable to pull images because no subordinate UIDs or GIDs are mapped into its user namespace, as the single-line `uid_map` below shows.
   484  
   485  #### Symptom
   486  
   487  ```console
   488  $ podman unshare cat /proc/self/uid_map
   489  	 0       1000          1
   490  ```
   491  
   492  #### Solution
   493  
   494  ```console
   495  $ podman system migrate
   496  ```
   497  
The original command now returns a full mapping:
   499  
   500  ```console
   501  $ podman unshare cat /proc/self/uid_map
   502  	 0       1000          1
   503  	 1     100000      65536
   504  ```
   505  
   506  Reference [subuid](https://man7.org/linux/man-pages/man5/subuid.5.html) and [subgid](https://man7.org/linux/man-pages/man5/subgid.5.html) man pages for more detail.
   507  
   508  ### 20) Passed-in devices or files can't be accessed in rootless container
   509  
As a non-root user you have group access rights to a device or files that you
want to pass into a rootless container with `--device=...` or `--volume=...`.
   512  
   513  #### Symptom
   514  
   515  Any access inside the container is rejected with "Permission denied".
   516  
   517  #### Solution
   518  
The runtime uses `setgroups(2)`, hence the process loses all additional groups
the non-root user has. Use the `--group-add keep-groups` flag to pass the
user's supplementary group access into the container. This is currently only
available with the `crun` OCI runtime.
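
For example (a sketch; the host directory is illustrative and assumed to be accessible to one of your supplementary groups):

```console
$ podman run --rm --group-add keep-groups -v /some/shared/dir:/data docker.io/library/alpine ls /data
```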
   523  
   524  ### 21) A rootless container running in detached mode is closed at logout
   525  <!-- This is the same as section 17 above and should be deleted -->
   526  
   527  When running a container with a command like `podman run --detach httpd` as
   528  a rootless user, the container is closed upon logout and is not kept running.
   529  
   530  #### Symptom
   531  
   532  When logging out of a rootless user session, all containers that were started
   533  in detached mode are stopped and are not kept running.  As the root user, these
   534  same containers would survive the logout and continue running.
   535  
   536  #### Solution
   537  
When systemd notes that a session that started a Podman container has exited,
it will also stop any containers that have been associated with it.  To avoid
this, use the following command before logging out: `loginctl enable-linger`.
To later revert the linger functionality, use `loginctl disable-linger`.
   542  
See also: loginctl(1), systemd(1).
   544  
   545  ### 22) Containers default detach keys conflict with shell history navigation
   546  
Podman defaults to `ctrl-p,ctrl-q` to detach from a running container. The
bash and zsh shells default to `ctrl-p` for displaying the previous
command.  This causes issues when running a shell inside of a container.
   550  
   551  #### Symptom
   552  
With the default detach key combo ctrl-p,ctrl-q, shell history navigation
(tested in bash and zsh) using ctrl-p to access the previous command will not
display this previous command, or anything else.  Conmon is waiting for an
additional character to see if the user wants to detach from the container.
Adding additional characters to the command will cause it to be displayed along
with the additional character. If the user types ctrl-p a second time, the shell
displays the second-to-last command.
   560  
   561  #### Solution
   562  
The solution to this is to change the default detach_keys. For example, in order
to change the default to `ctrl-q,ctrl-q`, use the `--detach-keys` option.
   565  
   566  ```console
   567  $ podman run -ti --detach-keys ctrl-q,ctrl-q fedora sh
   568  ```
   569  
To make this change the default for all containers, users can modify the
containers.conf file. This can be done in your home directory by adding the
following lines to the user's containers.conf:
   573  
   574  ```console
   575  $ cat >> ~/.config/containers/containers.conf << _eof
   576  [engine]
   577  detach_keys="ctrl-q,ctrl-q"
   578  _eof
   579  ```
   580  
To affect containers run by root as well as by all users, modify the system-wide
defaults in `/etc/containers/containers.conf`.
   583  
   584  
   585  ### 23) Container with exposed ports won't run in a pod
   586  
A container with ports that have been published with the `--publish` or `-p` option
cannot be run within a pod.
   589  
   590  #### Symptom
   591  
   592  ```console
$ podman pod create --name srcview -p 127.0.0.1:3434:3434 -p 127.0.0.1:7080:7080 -p 127.0.0.1:3370:3370
4b2f4611fa2cbd60b3899b936368c2b3f4f0f68bc8e6593416e0ab8ecb0a3f1d
   594  
   595  $ podman run --pod srcview --name src-expose -p 3434:3434 -v "${PWD}:/var/opt/localrepo":Z,ro sourcegraph/src-expose:latest serve /var/opt/localrepo
   596  Error: cannot set port bindings on an existing container network namespace
   597  ```
   598  
   599  #### Solution
   600  
   601  This is a known limitation.  If a container will be run within a pod, it is not necessary
   602  to publish the port for the containers in the pod. The port must only be published by the
   603  pod itself.  Pod network stacks act like the network stack on the host - you have a
   604  variety of containers in the pod, and programs in the container, all sharing a single
   605  interface and IP address, and associated ports. If one container binds to a port, no other
   606  container can use that port within the pod while it is in use. Containers in the pod can
   607  also communicate over localhost by having one container bind to localhost in the pod, and
   608  another connect to that port.
   609  
   610  In the example from the symptom section, dropping the `-p 3434:3434` would allow the
   611  `podman run` command to complete, and the container as part of the pod would still have
   612  access to that port.  For example:
   613  
   614  ```console
   615  $ podman run --pod srcview --name src-expose -v "${PWD}:/var/opt/localrepo":Z,ro sourcegraph/src-expose:latest serve /var/opt/localrepo
   616  ```
   617  
   618  ### 24) Podman container images fail with `fuse: device not found` when run
   619  
   620  Some container images require that the fuse kernel module is loaded in the kernel
   621  before they will run with the fuse filesystem in play.
   622  
   623  #### Symptom
   624  
When trying to run the container images found at quay.io/podman, quay.io/containers,
registry.access.redhat.com/ubi8 or other locations, an error will sometimes be returned:
   627  
   628  <!-- this would be better if it showed the command being run, and use ```console markup -->
   629  ```
   630  ERRO error unmounting /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: invalid argument
   631  error mounting container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error mounting build container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error creating overlay mount to /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: using mount program /usr/bin/fuse-overlayfs: fuse: device not found, try 'modprobe fuse' first
   632  fuse-overlayfs: cannot mount: No such device
   633  : exit status 1
   634  ERRO exit status 1
   635  ```
   636  
   637  #### Solution
   638  
   639  If you encounter a `fuse: device not found` error when running the container image, it is likely that
   640  the fuse kernel module has not been loaded on your host system.  Use the command `modprobe fuse` to load the
module and then run the container image afterwards.  To enable this automatically at boot time, you can add a configuration
file to `/etc/modules-load.d`.  See `man modules-load.d` for more details.
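
For example (the file name is arbitrary):

```console
# echo fuse > /etc/modules-load.d/fuse.conf
```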
   643  
### 25) podman run --rootfs link/to/read/only/dir does not work
   645  
An error such as "OCI runtime error" on a read-only filesystem, or the error "{image} is not an absolute path or is a symlink", is often an indicator of this issue.  For more details, review this [issue](
https://github.com/containers/podman/issues/5895).
   648  
   649  #### Symptom
   650  
Rootless Podman requires certain files to exist in a file system in order to run.
Podman will create `/etc/resolv.conf`, `/etc/hosts` and other files on the rootfs in order
to mount volumes on them.
   654  
   655  #### Solution
   656  
Run the container once in read/write mode; Podman will generate all of the needed files on the rootfs, and
from that point forward you can run with a read-only rootfs.
   659  
   660  ```console
   661  $ podman run --rm --rootfs /path/to/rootfs true
   662  ```
   663  
   664  The command above will create all the missing directories needed to run the container.
   665  
After that, it can be used in read-only mode by multiple containers at the same time:
   667  
   668  ```console
   669  $ podman run --read-only --rootfs /path/to/rootfs ....
   670  ```
   671  
   672  Another option is to use an Overlay Rootfs Mount:
   673  
   674  ```console
   675  $ podman run --rootfs /path/to/rootfs:O ....
   676  ```
   677  
   678  Modifications to the mount point are destroyed when the container
   679  finishes executing, similar to a tmpfs mount point being unmounted.
   680  
   681  ### 26) Running containers with CPU limits fails with a permissions error
   682  
   683  On some systemd-based systems, non-root users do not have CPU limit delegation
   684  permissions. This causes setting CPU limits to fail.
   685  
   686  #### Symptom
   687  
   688  Running a container with a CPU limit options such as `--cpus`, `--cpu-period`,
   689  or `--cpu-quota` will fail with an error similar to the following:
   690  
   691      Error: opening file `cpu.max` for writing: Permission denied: OCI runtime permission denied error
   692  
   693  This means that CPU limit delegation is not enabled for the current user.
   694  
   695  #### Solution
   696  
   697  You can verify whether CPU limit delegation is enabled by running the following command:
   698  
   699  ```console
   700  $ cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"
   701  ```
   702  
   703  Example output might be:
   704  
   705      memory pids
   706  
   707  In the above example, `cpu` is not listed, which means the current user does
   708  not have permission to set CPU limits.
   709  
   710  If you want to enable CPU limit delegation for all users, you can create the
   711  file `/etc/systemd/system/user@.service.d/delegate.conf` with the contents:
   712  
   713  ```ini
   714  [Service]
   715  Delegate=memory pids cpu io
   716  ```
   717  
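After creating the file, reload the systemd configuration (a standard systemd step when adding drop-in files):

```console
# systemctl daemon-reload
```
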
   718  After logging out and logging back in, you should have permission to set CPU
   719  limits.
   720  
### 27) `exec container process '/bin/sh': Exec format error` (or a binary other than `/bin/sh`)
   722  
This can happen when running a container from an image built for a different architecture than the one you are running on.

For example, a remote repository may only have, and thus send you, a `linux/arm64` _OS/ARCH_ while you run on `linux/amd64` (as happened in https://github.com/openMF/community-app/issues/3323 due to https://github.com/timbru31/docker-ruby-node/issues/564).
   726  
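You can check the OS/architecture of a local image and, where the registry provides one, pull a matching variant explicitly (a sketch; `IMAGE` is a placeholder):

```console
$ podman image inspect --format '{{.Os}}/{{.Architecture}}' IMAGE
$ podman pull --platform linux/amd64 IMAGE
```
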
### 28) `Error: failed to create sshClient: Connection to bastion host (ssh://user@host:22/run/user/.../podman/podman.sock) failed.: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain`
   728  
In some situations where the client is not on the same machine as the Podman service, the client key could use a cipher not supported by the host. This indicates an issue with the client's SSH configuration. Until remedied, using Podman over SSH
with the existing key will be impossible.
   731  
   732  #### Symptom
   733  
The accepted ciphers per `/etc/crypto-policies/back-ends/openssh.config` do not include the one used to create the public/private key pair that was transferred over to the host for ssh authentication.
   735  
   736  You can confirm this is the case by attempting to connect to the host via `podman-remote info` from the client and simultaneously on the host running `journalctl -f` and watching for the error `userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]`.
   737  
   738  #### Solution
   739  
   740  Create a new key using a supported algorithm e.g. ecdsa:
   741  
   742  ```console
   743  $ ssh-keygen -t ecdsa -f ~/.ssh/podman
   744  ```
   745  
   746  Then copy the new id over:
   747  
   748  ```console
   749  $ ssh-copy-id -i ~/.ssh/podman.pub user@host
   750  ```
   751  
   752  And then re-add the connection (removing the old one if necessary):
   753  
   754  ```console
   755  $ podman-remote system connection add myuser --identity ~/.ssh/podman ssh://user@host/run/user/1000/podman/podman.sock
   756  ```
   757  
   758  And now this should work:
   759  
   760  ```console
   761  $ podman-remote info
   762  ```
   763  
### 29) Rootless CNI networking fails in RHEL with Podman v2.2.1 to v3.0.1
   765  
   766  A failure is encountered when trying to use networking on a rootless
   767  container in Podman v2.2.1 through v3.0.1 on RHEL.  This error does not
   768  occur on other Linux distributions.
   769  
   770  #### Symptom
   771  
   772  A rootless container is created using a CNI network, but the `podman run` command
   773  returns an error that an image must be built.
   774  
   775  #### Solution
   776  
   777  In order to use a CNI network in a rootless container on RHEL,
   778  an Infra container image for CNI-in-slirp4netns must be created.  The
   779  instructions for building the Infra container image can be found for
   780  v2.2.1 [here](https://github.com/containers/podman/tree/v2.2.1-rhel/contrib/rootless-cni-infra),
   781  and for v3.0.1 [here](https://github.com/containers/podman/tree/v3.0.1-rhel/contrib/rootless-cni-infra).
   782  
### 30) Container-related firewall rules are lost after reloading firewalld
The container network can't be reached after `firewall-cmd --reload` or `systemctl restart firewalld`. Running `podman network reload` will fix it, but it has to be done manually.
   785  
   786  #### Symptom
   787  The firewall rules created by podman are lost when the firewall is reloaded.
   788  
   789  #### Solution
[@ranjithrajaram](https://github.com/containers/podman/issues/5431#issuecomment-847758377) has created a systemd hook to fix this issue.

1) For `firewall-cmd --reload`, create a systemd unit file with the following:
   793  ```ini
   794  [Unit]
   795  Description=firewalld reload hook - run a hook script on firewalld reload
   796  Wants=dbus.service
   797  After=dbus.service
   798  
   799  [Service]
   800  Type=simple
   801  ExecStart=/bin/bash -c '/bin/busctl monitor --system --match "interface=org.fedoraproject.FirewallD1,member=Reloaded" --match "interface=org.fedoraproject.FirewallD1,member=PropertiesChanged" | while read -r line ; do podman network reload --all ; done'
   802  
   803  [Install]
   804  WantedBy=default.target
   805  ```
   806  
2) For `systemctl restart firewalld`, create a systemd unit file with the following:
   808  ```ini
   809  [Unit]
   810  Description=podman network reload
   811  Wants=firewalld.service
   812  After=firewalld.service
   813  PartOf=firewalld.service
   814  
   815  [Service]
   816  Type=simple
   817  RemainAfterExit=yes
   818  ExecStart=/usr/bin/podman network reload --all
   819  
   820  [Install]
   821  WantedBy=default.target
   822  ```
   823  
However, if you use `busctl monitor`, you can't get machine-readable output on RHEL 8,
since it doesn't have `busctl -j`, as mentioned by [@yrro](https://github.com/containers/podman/issues/5431#issuecomment-896943018).

For RHEL 8, you can use the following one-liner bash script:
   828  ```ini
   829  [Unit]
   830  Description=Redo podman NAT rules after firewalld starts or reloads
   831  Wants=dbus.service
   832  After=dbus.service
   833  Requires=firewalld.service
   834  
   835  [Service]
   836  Type=simple
   837  ExecStart=/bin/bash -c "dbus-monitor --profile --system 'type=signal,sender=org.freedesktop.DBus,path=/org/freedesktop/DBus,interface=org.freedesktop.DBus,member=NameAcquired,arg0=org.fedoraproject.FirewallD1' 'type=signal,path=/org/fedoraproject/FirewallD1,interface=org.fedoraproject.FirewallD1,member=Reloaded' | sed -u '/^#/d' | while read -r type timestamp serial sender destination path interface member _junk; do if [[ $type = '#'* ]]; then continue; elif [[ $interface = org.freedesktop.DBus && $member = NameAcquired ]]; then echo 'firewalld started'; podman network reload --all; elif [[ $interface = org.fedoraproject.FirewallD1 && $member = Reloaded ]]; then echo 'firewalld reloaded'; podman network reload --all; fi; done"
Restart=always
   839  
   840  [Install]
   841  WantedBy=default.target
   842  ```
`busctl monitor` is almost usable in RHEL 8, except that it always outputs two bogus events when it starts up,
one of which is (in its only machine-readable format) indistinguishable from the `NameOwnerChanged` signal that you get when firewalld starts up.
This means you would get an extra `podman network reload --all` when this unit starts.
   846  
Alternatively, you can use the following systemd service with the Python 3 code below.
   848  
   849  ```ini
   850  [Unit]
   851  Description=Redo podman NAT rules after firewalld starts or reloads
   852  Wants=dbus.service
   853  Requires=firewalld.service
   854  After=dbus.service
   855  
   856  [Service]
   857  Type=simple
ExecStart=/usr/bin/python3 /path/to/python/code/podman-redo-nat.py
   859  Restart=always
   860  
   861  [Install]
   862  WantedBy=default.target
   863  ```
The code reloads the Podman network twice when you use `systemctl restart firewalld`.
```python
   866  import dbus
   867  from gi.repository import GLib
   868  from dbus.mainloop.glib import DBusGMainLoop
   869  import subprocess
   870  import sys
   871  
   872  # I'm a bit confused on the return values in the code
   873  # Not sure if they are needed.
   874  
   875  def reload_podman_network():
   876      try:
   877          subprocess.run(["podman","network","reload","--all"],timeout=90)
   878          # I'm not sure about this part
   879          sys.stdout.write("podman network reload done\n")
   880          sys.stdout.flush()
   881      except subprocess.TimeoutExpired as t:
   882          sys.stderr.write(f"Podman reload failed due to Timeout {t}")
   883      except subprocess.CalledProcessError as e:
   884          sys.stderr.write(f"Podman reload failed due to {e}")
   885      except Exception as e:
   886          sys.stderr.write(f"Podman reload failed with an Unhandled Exception {e}")
   887  
   888      return False
   889  
   890  def signal_handler(*args, **kwargs):
   891      if kwargs.get('member') == "Reloaded":
   892          reload_podman_network()
   893      elif kwargs.get('member') == "NameOwnerChanged":
   894          reload_podman_network()
   895      else:
   896          return None
   897      return None
   898  
   899  def signal_listener():
   900      try:
   901          DBusGMainLoop(set_as_default=True)# Define the loop.
   902          loop = GLib.MainLoop()
   903          system_bus = dbus.SystemBus()
   904          # Listens to systemctl restart firewalld with a filter added, will cause podman network to be reloaded twice
   905          system_bus.add_signal_receiver(signal_handler,dbus_interface='org.freedesktop.DBus',arg0='org.fedoraproject.FirewallD1',member_keyword='member')
   906          # Listens to firewall-cmd --reload
   907          system_bus.add_signal_receiver(signal_handler,dbus_interface='org.fedoraproject.FirewallD1',signal_name='Reloaded',member_keyword='member')
   908          loop.run()
   909      except KeyboardInterrupt:
   910          loop.quit()
   911          sys.exit(0)
   912      except Exception as e:
   913          loop.quit()
   914          sys.stderr.write(f"Error occurred {e}")
   915          sys.exit(1)
   916  
   917  if __name__ == "__main__":
   918      signal_listener()
   919  ```
   920  
### 31) Podman run fails with `ERRO[0000] XDG_RUNTIME_DIR directory "/run/user/0" is not owned by the current user` or `Error: error creating tmpdir: mkdir /run/user/1000: permission denied`
   922  
   923  A failure is encountered when performing `podman run` with a warning `XDG_RUNTIME_DIR is pointing to a path which is not writable. Most likely podman will fail.`
   924  
   925  #### Symptom
   926  
A rootless container is being invoked with the cgroup configuration `cgroupv2` for a user with a missing or invalid **systemd session**.

Example cases:
   930  ```console
   931  # su user1 -c 'podman images'
   932  ERRO[0000] XDG_RUNTIME_DIR directory "/run/user/0" is not owned by the current user
   933  ```
   934  ```console
   935  # su - user1 -c 'podman images'
   936  Error: error creating tmpdir: mkdir /run/user/1000: permission denied
   937  ```
   938  
   939  #### Solution
   940  
Podman expects a valid login session for the `rootless+cgroupv2` use-case. Podman execution is expected to fail if the login session is not present. In most cases, Podman will figure out a solution on its own, but if `XDG_RUNTIME_DIR` points to a path that is not writable, execution will most likely fail. Typical scenarios are users trying to use Podman with `su - <user> -c '<podman-command>'`, or via `sudo` with a badly configured systemd session.
   942  
   943  Alternatives:
   944  
* Execute Podman via __systemd-run__, which will first start a systemd login session:
   946  
   947    ```console
   948    $ sudo systemd-run --machine=username@ --quiet --user --collect --pipe --wait podman run --rm docker.io/library/alpine echo hello
   949    ```
   950  * Start an interactive shell in a systemd login session with the command `machinectl shell <username>@`
   951    and then run Podman
   952  
   953    ```console
   954    $ sudo -i
   955    # machinectl shell username@
   956    Connected to the local host. Press ^] three times within 1s to exit session.
   957    $ podman run --rm docker.io/library/alpine echo hello
   958    ```
   959  * Start a new systemd login session by logging in with `ssh` i.e. `ssh <username>@localhost` and then run Podman.
   960  
* Before invoking a Podman command, create a valid login session for your rootless user using `loginctl enable-linger <username>`
   962  
### 32) 127.0.0.1:7777 port already bound
   964  
   965  After deleting a VM on macOS, the initialization of subsequent VMs fails.
   966  
   967  #### Symptom
   968  
After deleting a client VM on macOS via `podman machine stop && podman machine rm`, attempting to `podman machine init` a new client VM leads to an error about the 127.0.0.1:7777 port already being bound.
   970  
   971  #### Solution
   972  
You will need to remove the hanging gvproxy process bound to the port in question. For example, if the port mentioned in the error message is 127.0.0.1:7777, you can use the command `kill -9 $(lsof -t -i:7777)` to identify and remove the hanging process which prevents you from starting a new VM on that default port.
   974  
### 33) The sshd process fails to run inside of the container
   976  
   977  #### Symptom
   978  
   979  The sshd process running inside the container fails with the error
   980  "Error writing /proc/self/loginuid".
   981  
#### Solution
   983  
   984  If the `/proc/self/loginuid` file is already initialized then the
   985  `CAP_AUDIT_CONTROL` capability is required to override it.
   986  
This happens when running Podman from a user session, since the
`/proc/self/loginuid` file is already initialized.  The solution is to
run Podman from a system service, either by using the Podman service and
then using `podman --remote` to start the container, or simply by running
something like `systemd-run podman run ...`.  In this case the
container will only need `CAP_AUDIT_WRITE`.
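
For example (a sketch; the image name and sshd invocation are placeholders):

```console
# systemd-run podman run --rm -d IMAGE /usr/sbin/sshd -D
```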
   993  
### 34) Container creates a file that is not owned by the user's regular UID
   995  
   996  After running a container with rootless Podman, the non-root user sees a numerical UID and GID instead of a username and groupname.
   997  
   998  #### Symptom
   999  
  1000  When listing file permissions with `ls -l` on the host in a directory that was passed as `--volume /some/dir` to `podman run`,
  1001  the UID and GID are displayed rather than the corresponding username and groupname. The UID and GID numbers displayed are
  1002  from the user's subordinate UID and GID ranges on the host system.
  1003  
  1004  An example
  1005  
  1006  ```console
  1007  $ mkdir dir1
  1008  $ chmod 777 dir1
  1009  $ podman run --rm -v ./dir1:/dir1:Z \
  1010               --user 2003:2003 \
  1011               docker.io/library/ubuntu bash -c "touch /dir1/a; chmod 600 /dir1/a"
  1012  $ ls -l dir1/a
  1013  -rw-------. 1 102002 102002 0 Jan 19 19:35 dir1/a
  1014  $ less dir1/a
  1015  less: dir1/a: Permission denied
  1016  ```
  1017  
  1018  #### Solution
  1019  
  1020  If you want to read, chown, or remove such a file, enter a user
  1021  namespace. Instead of running commands such as `less dir1/a` or `rm dir1/a`, you
  1022  need to prepend the command-line with `podman unshare`, i.e.,
  1023  `podman unshare less dir1/a` or `podman unshare rm dir1/a`. To change the ownership
  1024  of the file `dir1/a` to your regular user's UID and GID, run `podman unshare chown 0:0 dir1/a`.
  1025  A file having the ownership `0:0` in the user namespace is owned by the regular
  1026  user on the host. To use Bash features, such as variable expansion and
  1027  globbing, you need to wrap the command with `bash -c`, e.g.
  1028  `podman unshare bash -c 'ls $HOME/dir1/a*'`.
  1029  
  1030  Would it have been possible to run Podman in another way so that your regular
  1031  user would have become the owner of the file? Yes, you can use the options
  1032  __--uidmap__ and __--gidmap__ to change how UIDs and GIDs are mapped
  1033  between the container and the host. Let's try it out.
  1034  
  1035  In the example above `ls -l` shows the UID 102002 and GID 102002. Set shell variables
  1036  
  1037  ```console
  1038  $ uid_from_ls=102002
  1039  $ gid_from_ls=102002
  1040  ```
  1041  
  1042  Set shell variables to the lowest subordinate UID and GID
  1043  
  1044  ```console
  1045  $ lowest_subuid=$(podman info --format "{{ (index .Host.IDMappings.UIDMap 1).HostID }}")
  1046  $ lowest_subgid=$(podman info --format "{{ (index .Host.IDMappings.GIDMap 1).HostID }}")
  1047  ```
  1048  
  1049  Compute the UID and GID inside the container that map to the owner of the created file on the host.
  1050  
  1051  ```console
  1052  $ uid=$(( $uid_from_ls - $lowest_subuid + 1))
  1053  $ gid=$(( $gid_from_ls - $lowest_subgid + 1))
  1054  ```
  1055  (In the computation it was assumed that there is only one subuid range and one subgid range)
  1056  
  1057  ```console
  1058  $ echo $uid
  1059  2003
  1060  $ echo $gid
  1061  2003
  1062  ```
  1063  
  1064  The computation shows that the UID is `2003` and the GID is `2003` inside the container.
  1065  This comes as no surprise as this is what was specified before with `--user=2003:2003`,
  1066  but the same computation could be used whenever a username is specified
  1067  or the `--user` option is not used.
  1068  
  1069  Run the container again but now with UIDs and GIDs mapped
  1070  
  1071  ```console
  1072  $ subuidSize=$(( $(podman info --format "{{ range .Host.IDMappings.UIDMap }}+{{.Size }}{{end }}" ) - 1 ))
  1073  $ subgidSize=$(( $(podman info --format "{{ range .Host.IDMappings.GIDMap }}+{{.Size }}{{end }}" ) - 1 ))
  1074  $ mkdir dir1
  1075  $ chmod 777 dir1
$ podman run --rm \
  1077    -v ./dir1:/dir1:Z \
  1078    --user $uid:$gid \
  1079    --uidmap $uid:0:1 \
  1080    --uidmap 0:1:$uid \
  1081    --uidmap $(($uid+1)):$(($uid+1)):$(($subuidSize-$uid)) \
  1082    --gidmap $gid:0:1 \
  1083    --gidmap 0:1:$gid \
  1084    --gidmap $(($gid+1)):$(($gid+1)):$(($subgidSize-$gid)) \
  1085       docker.io/library/ubuntu bash -c "touch /dir1/a; chmod 600 /dir1/a"
$ id -un
tester
$ id -gn
tester
  1090  $ ls -l dir1/a
  1091  -rw-------. 1 tester tester 0 Jan 19 20:31 dir1/a
  1092  $
  1093  ```
  1094  
In this example the `--user` option specified a rootless user in the container.
The rootless user could also have been specified in the container image, e.g.
  1097  
  1098  ```console
  1099  $ podman image inspect --format "user: {{.User}}" IMAGE
  1100  user: hpc
  1101  ```
  1102  the same problem could also occur even without specifying `--user`.
  1103  
Another variant of the same problem could occur when using
`--user=root:root` (the default), but where the root user creates non-root owned files
in some way (e.g., by creating them directly, or by switching the effective UID to
a rootless user and then creating files).
  1108  
### 35) Passed-in devices or files can't be accessed in rootless container (UID/GID mapping problem)
  1110  
  1111  As a non-root user you have access rights to devices, files and directories that you
  1112  want to pass into a rootless container with `--device=...`, `--volume=...` or `--mount=..`.
  1113  
  1114  Podman by default maps a non-root user inside a container to one of the user's
  1115  subordinate UIDs and subordinate GIDs on the host. When the container user tries to access a
  1116  file, a "Permission denied" error could occur because the container user does not have the
  1117  permissions of the regular user of the host.
  1118  
  1119  #### Symptom
  1120  
  1121  * Any access inside the container is rejected with "Permission denied"
  1122  for files, directories or devices passed in to the container
  1123  with `--device=..`,`--volume=..` or `--mount=..`, e.g.
  1124  
  1125  ```console
  1126  $ mkdir dir1
  1127  $ chmod 700 dir1
  1128  $ podman run --rm -v ./dir1:/dir1:Z \
  1129               --user 2003:2003 \
  1130               docker.io/library/ubuntu ls /dir1
  1131  ls: cannot open directory '/dir1': Permission denied
  1132  ```
  1133  
  1134  #### Solution
  1135  
  1136  We follow essentially the same solution as in the previous
  1137  troubleshooting tip:
  1138  
  1139      Container creates a file that is not owned by the regular UID
  1140  
  1141  but for this problem the container UID and GID can't be as
  1142  easily computed by mere addition and subtraction.
  1143  
  1144  In other words, it might be more challenging to find out the UID and
  1145  the GID inside the container that we want to map to the regular
  1146  user on the host.
  1147  
  1148  If the `--user` option is used together with a numerical UID and GID
  1149  to specify a rootless user, we already know the answer.
  1150  
  1151  If the `--user` option is used together with a username and groupname,
  1152  we could look up the UID and GID in the file `/etc/passwd` of the container.
  1153  
  1154  If the container user is not set via `--user` but instead from the
  1155  container image, we could inspect the container image
  1156  
  1157  ```console
  1158  $ podman image inspect --format "user: {{.User}}" IMAGE
  1159  user: hpc
  1160  ```
  1161  
  1162  and then look it up in `/etc/passwd` of the container.
  1163  
If the problem occurs in a container that is started to run as root but later
switches to an effective UID of a rootless user, it might be less
straightforward to find out the UID and the GID. Reading the
`Containerfile`, `Dockerfile` or the `/etc/passwd` of the container could give a clue.
  1168  
  1169  To run the container with the rootless container UID and GID mapped to the
  1170  user's regular UID and GID on the host follow these steps:
  1171  
  1172  Set the `uid` and `gid` shell variables in a Bash shell to the UID and GID
  1173  of the user that will be running inside the container, e.g.
  1174  
  1175  ```console
  1176  $ uid=2003
  1177  $ gid=2003
  1178  ```
  1179  
  1180  and run
  1181  
  1182  ```console
  1183  $ mkdir dir1
  1184  $ echo hello > dir1/file.txt
  1185  $ chmod 700 dir1/file.txt
  1186  $ subuidSize=$(( $(podman info --format "{{ range .Host.IDMappings.UIDMap }}+{{.Size }}{{end }}" ) - 1 ))
  1187  $ subgidSize=$(( $(podman info --format "{{ range .Host.IDMappings.GIDMap }}+{{.Size }}{{end }}" ) - 1 ))
  1188  $ podman run --rm \
  1189    -v ./dir1:/dir1:Z \
  1190    --user $uid:$gid \
  1191    --uidmap $uid:0:1 \
  1192    --uidmap 0:1:$uid \
  1193    --uidmap $(($uid+1)):$(($uid+1)):$(($subuidSize-$uid)) \
  1194    --gidmap $gid:0:1 \
  1195    --gidmap 0:1:$gid \
  1196    --gidmap $(($gid+1)):$(($gid+1)):$(($subgidSize-$gid)) \
  1197    docker.io/library/alpine cat /dir1/file.txt
  1198  hello
  1199  ```
  1200  
  1201  A side-note: Using [__--userns=keep-id__](https://docs.podman.io/en/latest/markdown/podman-run.1.html#userns-mode)
  1202  can sometimes be an alternative solution, but it forces the regular
  1203  user's host UID to be mapped to the same UID inside the container
  1204  so it provides less flexibility than using `--uidmap` and `--gidmap`.
  1205  
### 36) Images in the additional stores can be deleted even if there are containers using them
  1207  
When an image in an additional store is used, it is not locked, and thus it
can be deleted even if there are containers using it.
  1210  
  1211  #### Symptom
  1212  
```console
WARN[0000] Can't stat lower layer "/var/lib/containers/storage/overlay/l/7HS76F2P5N73FDUKUQAOJA3WI5" because it does not exist. Going through storage to recreate the missing symlinks.
```
  1214  
  1215  #### Solution
  1216  
It is the user's responsibility to make sure images in an additional
store are not deleted while being used by containers in another
store.
  1220  
### 37) Syncing bugfixes for podman-remote or setups using the Podman API
  1222  
After upgrading Podman to a newer version, an issue from the earlier version still presents itself while using podman-remote.
  1224  
  1225  #### Symptom
  1226  
While running podman-remote commands with the most up-to-date Podman client, issues that were fixed in a newer version can still arise on either the Podman client side or the Podman server side if the other side runs an older version.
  1228  
  1229  #### Solution
  1230  
When upgrading Podman to a particular version for the required fixes, users often make the mistake of only upgrading the Podman client. However, suppose a setup uses `podman-remote` or uses a client that communicates with the Podman server on a remote machine via the REST API. In that case, it is required to upgrade both the Podman client and the Podman server running on the remote machine. Both the Podman client and server must be upgraded to the same version.

Example: If a particular bug was fixed in `v4.1.0`, then the Podman client must have version `v4.1.0`, and the Podman server must have version `v4.1.0` as well.
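
To verify, check the versions reported over the remote connection; with a remote connection configured, `podman version` reports both the client and the server versions:

```console
$ podman --remote version
```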