
     1  ![PODMAN logo](https://raw.githubusercontent.com/containers/common/main/logos/podman-logo-full-vert.png)
     2  
     3  # Troubleshooting
     4  
     5  ## A list of common issues and solutions for Podman
     6  
     7  ---
     8  ### 1) Variety of issues - Validate Version
     9  
Many issues reported against Podman turn out to be fixed already in more
recent versions of the project.  Before reporting an issue, please check the
version you are running with `podman version` and compare it to the latest release
documented at the top of Podman's [README.md](README.md).
    14  
If they differ, please update your version of Podman to the latest available
and retry your command before reporting the issue.
    17  
    18  ---
    19  ### 2) Can't use volume mount, get permission denied
    20  
    21  ```console
    22  $ podman run -v ~/mycontent:/content fedora touch /content/file
    23  touch: cannot touch '/content/file': Permission denied
    24  ```
    25  
    26  #### Solution
    27  
    28  This is sometimes caused by SELinux, and sometimes by user namespaces.
    29  
    30  Labeling systems like SELinux require that proper labels are placed on volume
    31  content mounted into a container. Without a label, the security system might
    32  prevent the processes running inside the container from using the content. By
    33  default, Podman does not change the labels set by the OS.
    34  
    35  To change a label in the container context, you can add either of two suffixes
    36  **:z** or **:Z** to the volume mount. These suffixes tell Podman to relabel file
    37  objects on the shared volumes. The **z** option tells Podman that two containers
    38  share the volume content. As a result, Podman labels the content with a shared
    39  content label. Shared volume labels allow all containers to read/write content.
    40  The **Z** option tells Podman to label the content with a private unshared label.
    41  Only the current container can use a private volume.
    42  
    43  ```console
    44  $ podman run -v ~/mycontent:/content:Z fedora touch /content/file
    45  ```
    46  
    47  Make sure the content is private for the container.  Do not relabel system directories and content.
    48  Relabeling system content might cause other confined services on your machine to fail.  For these
    49  types of containers we recommend having SELinux separation disabled.  The option `--security-opt label=disable`
    50  will disable SELinux separation for the container.
    51  
    52  ```console
    53  $ podman run --security-opt label=disable -v ~:/home/user fedora touch /home/user/file
    54  ```
    55  
    56  In cases where the container image runs as a specific, non-root user, though, the
    57  solution is to fix the user namespace.  This would include container images such as
    58  the Jupyter Notebook image (which runs as "jovyan") and the Postgres image (which runs
    59  as "postgres").  In either case, use the `--userns` switch to map user namespaces,
    60  most of the time by using the **keep-id** option.
    61  
    62  ```console
    63  $ podman run -v "$PWD":/home/jovyan/work --userns=keep-id jupyter/scipy-notebook
    64  ```
    65  
    66  ---
    67  ### 3) No such image or Bare keys cannot contain ':'
    68  
    69  When doing a `podman pull` or `podman build` command and a "common" image cannot be pulled,
    70  it is likely that the `/etc/containers/registries.conf` file is either not installed or possibly
    71  misconfigured.
    72  
    73  #### Symptom
    74  
    75  ```console
    76  $ sudo podman build -f Dockerfile
    77  STEP 1: FROM alpine
    78  error building: error creating build container: no such image "alpine" in registry: image not known
    79  ```
    80  
    81  or
    82  
    83  ```console
    84  $ sudo podman pull fedora
    85  error pulling image "fedora": unable to pull fedora: error getting default registries to try: Near line 9 (last key parsed ''): Bare keys cannot contain ':'.
    86  ```
    87  
    88  #### Solution
    89  
    90    * Verify that the `/etc/containers/registries.conf` file exists.  If not, verify that the containers-common package is installed.
    91    * Verify that the entries in the `unqualified-search-registries` list of the `/etc/containers/registries.conf` file are valid and reachable.
    92      * i.e. `unqualified-search-registries = ["registry.fedoraproject.org", "quay.io", "registry.access.redhat.com"]`
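
To confirm which registries Podman will actually search, you can check the file directly (the list shown is the example from above):

```console
$ grep unqualified-search-registries /etc/containers/registries.conf
unqualified-search-registries = ["registry.fedoraproject.org", "quay.io", "registry.access.redhat.com"]
```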
    93  
    94  ---
    95  ### 4) http: server gave HTTP response to HTTPS client
    96  
    97  When doing a Podman command such as `build`, `commit`, `pull`, or `push` to a registry,
    98  TLS verification is turned on by default.  If encryption is not used with
    99  those commands, this error can occur.
   100  
   101  #### Symptom
   102  
   103  ```console
   104  $ sudo podman push alpine docker://localhost:5000/myalpine:latest
   105  Getting image source signatures
   106  Get https://localhost:5000/v2/: http: server gave HTTP response to HTTPS client
   107  ```
   108  
   109  #### Solution
   110  
By default, TLS verification is turned on when communicating with registries from
Podman.  If the registry does not require encryption, Podman commands
such as `build`, `commit`, `pull` and `push` will fail unless TLS verification is turned
off using the `--tls-verify` option.  **NOTE:** Communicating with a registry
without TLS verification is strongly discouraged.
   116  
  * Turn off TLS verification by passing false to the `--tls-verify` option.
  * I.e. `podman push --tls-verify=false alpine docker://localhost:5000/myalpine:latest`
   119  
   120  
   121  For a global workaround, users[1] can create the file `/etc/containers/registries.conf.d/registry-NAME.conf`
   122  (replacing NAME with the name of this registry) with the following content (replacing FULLY.QUALIFIED.NAME.OF.REGISTRY with the address of this registry):
   123  
   124  ```
   125  [[registry]]
   126  location = "FULLY.QUALIFIED.NAME.OF.REGISTRY"
   127  insecure = true
   128  ```
   129  
[1] On macOS or Windows, first run `podman machine ssh` to log in to the Podman machine, then add the insecure entry to the `registries.conf.d` file there.
   131  
   132  **This is an insecure method and should be used cautiously.**
   133  
   134  ---
   135  ### 5) rootless containers cannot ping hosts
   136  
   137  When using the ping command from a non-root container, the command may
   138  fail because of a lack of privileges.
   139  
   140  #### Symptom
   141  
   142  ```console
   143  $ podman run --rm fedora ping -W10 -c1 redhat.com
   144  PING redhat.com (209.132.183.105): 56 data bytes
   145  
   146  --- redhat.com ping statistics ---
   147  1 packets transmitted, 0 packets received, 100% packet loss
   148  ```
   149  
   150  #### Solution
   151  
It is most likely necessary to enable unprivileged pings on the host.
Be sure that the GID of the user is part of the range in the
`/proc/sys/net/ipv4/ping_group_range` file (the kernel checks group IDs,
not user IDs, for unprivileged ICMP sockets).
   155  
   156  To change its value you can use something like:
   157  
   158  ```console
   159  # sysctl -w "net.ipv4.ping_group_range=0 2000000"
   160  ```
   161  
To make the change persistent, you'll need to add a file in
`/etc/sysctl.d` that contains `net.ipv4.ping_group_range=0 $MAX_GID`.
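
For example (the filename is arbitrary; any `.conf` file under `/etc/sysctl.d` is read at boot):

```console
# echo "net.ipv4.ping_group_range=0 2000000" > /etc/sysctl.d/50-unprivileged-ping.conf
# sysctl --system
```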
   164  
   165  ---
   166  ### 6) Build hangs when the Dockerfile contains the useradd command
   167  
When the Dockerfile contains a command like `RUN useradd -u 99999000 -g users newuser`, the build can hang.
   169  
   170  #### Symptom
   171  
If you are using a `useradd` command within a Dockerfile with a large UID/GID, it will create a large sparse file `/var/log/lastlog`.  This can cause the build to hang forever.  The Go language does not handle sparse files correctly, which can lead to huge files being created in your container image.
   173  
   174  #### Solution
   175  
If the entry in the Dockerfile looked like `RUN useradd -u 99999000 -g users newuser`, then add the `--no-log-init` parameter to change it to `RUN useradd --no-log-init -u 99999000 -g users newuser`. This option tells useradd to stop creating the lastlog file.
   177  
   178  ### 7) Permission denied when running Podman commands
   179  
When rootless Podman attempts to execute a container from a home directory mounted noexec, a permission error will be raised.
   181  
   182  #### Symptom
   183  
   184  If you are running Podman or Buildah on a home directory that is mounted noexec,
   185  then they will fail with a message like:
   186  
   187  ```console
   188  $ podman run centos:7
   189  standard_init_linux.go:203: exec user process caused "permission denied"
   190  ```
   191  
   192  #### Solution
   193  
Since the administrator of the system set up your home directory to be noexec, you will not be allowed to execute containers from storage in your home directory. It is possible to work around this by manually specifying a container storage path that is not on a noexec mount. Simply copy the file `/etc/containers/storage.conf` to `~/.config/containers/` (creating the directory if necessary). Specify a `graphroot` directory which is not on a noexec mount point and to which you have read/write privileges.  You will need to modify other fields to writable directories as well.
   195  
   196  For example
   197  
   198  ```console
   199  $ cat ~/.config/containers/storage.conf
   200  [storage]
   201    driver = "overlay"
   202    runroot = "/run/user/1000"
   203    graphroot = "/execdir/myuser/storage"
   204    [storage.options]
   205      mount_program = "/bin/fuse-overlayfs"
   206  ```
   207  
   208  ### 8) Permission denied when running systemd within a Podman container
   209  
   210  When running systemd as PID 1 inside of a container on an SELinux
   211  separated machine, it needs to write to the cgroup file system.
   212  
   213  #### Symptom
   214  
   215  Systemd gets permission denied when attempting to write to the cgroup file
   216  system, and AVC messages start to show up in the audit.log file or journal on
   217  the system.
   218  
   219  #### Solution
   220  
Newer versions of Podman (2.0 or greater) support running init-based containers
with a different SELinux label, which allows the container process access to the
cgroup file system. This feature requires container-selinux-2.132 or newer.
   225  
   226  Prior to Podman 2.0, the SELinux boolean `container_manage_cgroup` allows
   227  container processes to write to the cgroup file system. Turn on this boolean,
   228  on SELinux separated systems, to allow systemd to run properly in the container.
   229  Only do this on systems running older versions of Podman.
   230  
   231  ```console
   232  # setsebool -P container_manage_cgroup true
   233  ```
   234  
   235  ### 9) Newuidmap missing when running rootless Podman commands
   236  
   237  Rootless Podman requires the newuidmap and newgidmap programs to be installed.
   238  
   239  #### Symptom
   240  
   241  If you are running Podman or Buildah as a rootless user, you get an error complaining about
   242  a missing newuidmap executable.
   243  
   244  ```console
   245  $ podman run -ti fedora sh
   246  command required for rootless mode with multiple IDs: exec: "newuidmap": executable file not found in $PATH
   247  ```
   248  
   249  #### Solution
   250  
   251  Install a version of shadow-utils that includes these executables.  Note that for RHEL and CentOS 7, at least the 7.7 release must be installed for support to be available.
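
For example, on Fedora/RHEL-family systems (on Debian/Ubuntu the executables ship in the `uidmap` package instead):

```console
# dnf install -y shadow-utils
```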
   252  
   253  ### 10) rootless setup user: invalid argument
   254  
   255  Rootless Podman requires the user running it to have a range of UIDs listed in /etc/subuid and /etc/subgid.
   256  
   257  #### Symptom
   258  
   259  A user, either via --user or through the default configured for the image, is not mapped inside the namespace.
   260  
   261  ```console
   262  $ podman run --rm -ti --user 1000000 alpine echo hi
   263  Error: container create failed: container_linux.go:344: starting container process caused "setup user: invalid argument"
   264  ```
   265  
   266  #### Solution
   267  
Update `/etc/subuid` and `/etc/subgid` with entries for the user that look like:
   269  
   270  ```console
   271  $ cat /etc/subuid
   272  johndoe:100000:65536
   273  test:165536:65536
   274  ```
   275  
The format of this file is `USERNAME:UID:RANGE`:

* The username as listed in `/etc/passwd` or returned by `getpwent`.
* The initial UID allocated for the user.
* The size of the range of UIDs allocated for the user.
   281  
   282  This means johndoe is allocated UIDs 100000-165535 as well as his standard UID in the
   283  `/etc/passwd` file.
   284  
You should ensure that each user has a unique range of UIDs, because overlapping
UIDs would potentially allow one user to attack another user. In addition, make
sure that the range of UIDs you allocate can cover all UIDs that the container
requires. For example, if the container has a user with UID 10000, ensure you
have at least 10000 subuids, and if the container needs to be run as a user with
UID 1000000, ensure you have at least 1000000 subuids.
   291  
You could also use the `usermod` program to assign UIDs to a user.

If you update either the `/etc/subuid` or `/etc/subgid` file, you need to
stop all running containers and kill the pause process; running the
`podman system migrate` command does both of these for you.
   298  
   299  ```console
   300  # usermod --add-subuids 200000-201000 --add-subgids 200000-201000 johndoe
   301  # grep johndoe /etc/subuid /etc/subgid
   302  /etc/subuid:johndoe:200000:1001
   303  /etc/subgid:johndoe:200000:1001
   304  ```
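
For example, after updating the ranges with `usermod`:

```console
$ podman system migrate
```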
   305  
   306  ### 11) Changing the location of the Graphroot leads to permission denied
   307  
   308  When I change the graphroot storage location in storage.conf, the next time I
   309  run Podman, I get an error like:
   310  
   311  ```console
   312  # podman run -p 5000:5000 -it centos bash
   313  
   314  bash: error while loading shared libraries: /lib64/libc.so.6: cannot apply additional memory protection after relocation: Permission denied
   315  ```
   316  
For example, the admin sets up a spare disk to be mounted at `/srv/containers`,
and points storage.conf at this directory.
   319  
   320  
   321  #### Symptom
   322  
   323  SELinux blocks containers from using arbitrary locations for overlay storage.
   324  These directories need to be labeled with the same labels as if the content was
   325  under `/var/lib/containers/storage`.
   326  
   327  #### Solution
   328  
   329  Tell SELinux about the new containers storage by setting up an equivalence record.
   330  This tells SELinux to label content under the new path, as if it was stored
   331  under `/var/lib/containers/storage`.
   332  
   333  ```console
   334  # semanage fcontext -a -e /var/lib/containers /srv/containers
   335  # restorecon -R -v /srv/containers
   336  ```
   337  
   338  The semanage command above tells SELinux to set up the default labeling of
   339  `/srv/containers` to match `/var/lib/containers`.  The `restorecon` command
   340  tells SELinux to apply the labels to the actual content.
   341  
   342  Now all new content created in these directories will automatically be created
   343  with the correct label.
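
To verify, list the directory's SELinux context; it should now carry the same label type as `/var/lib/containers` (typically `container_var_lib_t`):

```console
# ls -dZ /srv/containers
system_u:object_r:container_var_lib_t:s0 /srv/containers
```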
   344  
   345  ### 12) Anonymous image pull fails with 'invalid username/password'
   346  
   347  Pulling an anonymous image that doesn't require authentication can result in an
   348  `invalid username/password` error.
   349  
   350  #### Symptom
   351  
   352  If you pull an anonymous image, one that should not require credentials, you can receive
   353  an `invalid username/password` error if you have credentials established in the
   354  authentication file for the target container registry that are no longer valid.
   355  
   356  ```console
   357  $ podman run -it --rm docker://docker.io/library/alpine:latest ls
   358  Trying to pull docker://docker.io/library/alpine:latest...ERRO[0000] Error pulling image ref //alpine:latest: Error determining manifest MIME type for docker://alpine:latest: unable to retrieve auth token: invalid username/password
   359  Failed
   360  Error: unable to pull docker://docker.io/library/alpine:latest: unable to pull image: Error determining manifest MIME type for docker://alpine:latest: unable to retrieve auth token: invalid username/password
   361  ```
   362  
   363  This can happen if the authentication file is modified 'by hand' or if the credentials
   364  are established locally and then the password is updated later in the container registry.
   365  
   366  #### Solution
   367  
   368  Depending upon which container tool was used to establish the credentials, use `podman logout`
   369  or `docker logout` to remove the credentials from the authentication file.
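
For example, for Docker Hub:

```console
$ podman logout docker.io
Removed login credentials for docker.io
```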
   370  
   371  ### 13) Running Podman inside a container causes container crashes and inconsistent states
   372  
   373  Running Podman in a container and forwarding some, but not all, of the required host directories can cause inconsistent container behavior.
   374  
   375  #### Symptom
   376  
   377  After creating a container with Podman's storage directories mounted in from the host and running Podman inside a container, all containers show their state as "configured" or "created", even if they were running or stopped.
   378  
   379  #### Solution
   380  
   381  When running Podman inside a container, it is recommended to mount at a minimum `/var/lib/containers/storage/` as a volume.
   382  Typically, you will not mount in the host version of the directory, but if you wish to share containers with the host, you can do so.
   383  If you do mount in the host's `/var/lib/containers/storage`, however, you must also mount in the host's `/run/libpod` and `/run/containers/storage` directories.
   384  Not doing this will cause Podman in the container to detect that temporary files have been cleared, leading it to assume a system restart has taken place.
   385  This can cause Podman to reset container states and lose track of running containers.
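
A minimal sketch of sharing the host's storage with a containerized Podman (assuming a privileged container and the `quay.io/podman/stable` image; adjust paths and image for your setup):

```console
$ sudo podman run --privileged \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    -v /run/libpod:/run/libpod \
    -v /run/containers/storage:/run/containers/storage \
    quay.io/podman/stable podman ps -a
```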
   386  
   387  For running containers on the host from inside a container, we also recommend the [Podman remote client](docs/tutorials/remote_client.md), which only requires a single socket to be mounted into the container.
   388  
   389  ### 14) Rootless 'podman build' fails EPERM on NFS:
   390  
NFS enforces file creation with different UIDs on the server side and does not understand user namespaces, which rootless Podman requires.
When a container root process like YUM attempts to create a file owned by a different UID, the NFS server denies its creation.
NFS also causes problems for file locks when the storage is on it.  Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode, as these file systems do not understand user namespaces.
   394  
   395  #### Symptom
   396  ```console
   397  $ podman build .
   398  ERRO[0014] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   399  error creating build container: Error committing the finished image: error adding layer with blob "sha256:a02a4930cb5d36f3290eb84f4bfa30668ef2e9fe3a1fb73ec015fc58b9958b17": ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   400  ```
   401  
   402  #### Solution
   403  Choose one of the following:
   404    * Set up containers/storage in a different directory, not on an NFS share.
   405      * Create a directory on a local file system.
   406      * Edit `~/.config/containers/containers.conf` and point the `volume_path` option to that local directory. (Copy `/usr/share/containers/containers.conf` if `~/.config/containers/containers.conf` does not exist)
   407    * Otherwise just run Podman as root, via `sudo podman`
   408  
   409  ### 15) Rootless 'podman build' fails when using OverlayFS:
   410  
The Overlay file system (OverlayFS) requires the ability to call `mknod` when creating whiteout files
while extracting an image.  However, a rootless user does not have the privileges to use `mknod` in this capacity.
   413  
   414  #### Symptom
   415  ```console
   416  $ podman build --storage-driver overlay .
   417  STEP 1: FROM docker.io/ubuntu:xenial
   418  Getting image source signatures
   419  Copying blob edf72af6d627 done
   420  Copying blob 3e4f86211d23 done
   421  Copying blob 8d3eac894db4 done
   422  Copying blob f7277927d38a done
   423  Copying config 5e13f8dd4c done
   424  Writing manifest to image destination
   425  Storing signatures
   426  Error: creating build container: Error committing the finished image: error adding layer with blob "sha256:8d3eac894db4dc4154377ad28643dfe6625ff0e54bcfa63e0d04921f1a8ef7f8": Error processing tar file(exit status 1): operation not permitted
   427  $ podman build .
   428  ERRO[0014] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   429  error creating build container: Error committing the finished image: error adding layer with blob "sha256:a02a4930cb5d36f3290eb84f4bfa30668ef2e9fe3a1fb73ec015fc58b9958b17": ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
   430  ```
   431  
   432  #### Solution
   433  Choose one of the following:
   434    * Complete the build operation as a privileged user.
   435    * Install and configure fuse-overlayfs.
   436      * Install the fuse-overlayfs package for your Linux Distribution.
   437      * Add `mount_program = "/usr/bin/fuse-overlayfs"` under `[storage.options]` in your `~/.config/containers/storage.conf` file.
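
For the fuse-overlayfs option, one way to add the setting from the shell (assuming `~/.config/containers/storage.conf` exists and has no conflicting `[storage.options]` section):

```console
$ cat >> ~/.config/containers/storage.conf << _eof
[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
_eof
```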
   438  
   439  ### 16) RHEL 7 and CentOS 7 based `init` images don't work with cgroup v2
   440  
   441  The systemd version shipped in RHEL 7 and CentOS 7 doesn't have support for cgroup v2.  Support for cgroup v2 requires version 230 of systemd or newer, which
   442  was never shipped or supported on RHEL 7 or CentOS 7.
   443  
   444  #### Symptom
   445  ```console
   446  # podman run --name test -d registry.access.redhat.com/rhel7-init:latest && sleep 10 && podman exec test systemctl status
   447  c8567461948439bce72fad3076a91ececfb7b14d469bfa5fbc32c6403185beff
   448  Failed to get D-Bus connection: Operation not permitted
   449  Error: non zero exit code: 1: OCI runtime error
   450  ```
   451  
   452  #### Solution
   453  You'll need to either:
   454  
   455  * configure the host to use cgroup v1. On Fedora you can do:
   456  
   457  ```console
   458  # dnf install -y grubby
# grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
   460  # reboot
   461  ```
   462  
   463  * update the image to use an updated version of systemd.
   464  
   465  ### 17) rootless containers exit once the user session exits
   466  
You need to enable lingering mode through loginctl to prevent user processes from
being killed once the user session exits.
   469  
   470  #### Symptom
   471  
   472  Once the user logs out all the containers exit.
   473  
   474  #### Solution
   475  
   476  ```console
   477  # loginctl enable-linger $UID
   478  ```
   479  
   480  ### 18) `podman run` fails with "bpf create: permission denied error"
   481  
   482  The Kernel Lockdown patches deny eBPF programs when Secure Boot is enabled in the BIOS. [Matthew Garrett's post](https://mjg59.dreamwidth.org/50577.html) describes the relationship between Lockdown and Secure Boot and [Jan-Philip Gehrcke's](https://gehrcke.de/2019/09/running-an-ebpf-program-may-require-lifting-the-kernel-lockdown/) connects this with eBPF. [RH bug 1768125](https://bugzilla.redhat.com/show_bug.cgi?id=1768125) contains some additional details.
   483  
   484  #### Symptom
   485  
   486  Attempts to run podman result in
   487  
   488  ```Error: bpf create : Operation not permitted: OCI runtime permission denied error```
   489  
   490  #### Solution
   491  
   492  One workaround is to disable Secure Boot in your BIOS.
   493  
   494  ### 19) error creating libpod runtime: there might not be enough IDs available in the namespace
   495  
Unable to pull images because not enough IDs are mapped into the rootless user namespace.
   497  
   498  #### Symptom
   499  
   500  ```console
   501  $ podman unshare cat /proc/self/uid_map
   502  	 0       1000          1
   503  ```
   504  
   505  #### Solution
   506  
   507  ```console
   508  $ podman system migrate
   509  ```
   510  
The original command now returns:
   512  
   513  ```console
   514  $ podman unshare cat /proc/self/uid_map
   515  	 0       1000          1
   516  	 1     100000      65536
   517  ```
   518  
   519  Reference [subuid](https://man7.org/linux/man-pages/man5/subuid.5.html) and [subgid](https://man7.org/linux/man-pages/man5/subgid.5.html) man pages for more detail.
   520  
   521  ### 20) Passed-in devices or files can't be accessed in rootless container
   522  
   523  As a non-root user you have group access rights to a device or files that you
   524  want to pass into a rootless container with `--device=...` or `--volume=...`
   525  
   526  #### Symptom
   527  
   528  Any access inside the container is rejected with "Permission denied".
   529  
   530  #### Solution
   531  
The runtime uses `setgroups(2)`, hence the process loses all additional groups
the non-root user has. Use the `--group-add keep-groups` flag to pass the
user's supplementary group access into the container. This is currently only
available with the `crun` OCI runtime.
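
For example, to check which groups survive inside the container (requires crun, as noted above):

```console
$ podman run --rm --group-add keep-groups fedora id
```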
   536  
   537  ### 21) A rootless container running in detached mode is closed at logout
   538  <!-- This is the same as section 17 above and should be deleted -->
   539  
   540  When running a container with a command like `podman run --detach httpd` as
   541  a rootless user, the container is closed upon logout and is not kept running.
   542  
   543  #### Symptom
   544  
   545  When logging out of a rootless user session, all containers that were started
   546  in detached mode are stopped and are not kept running.  As the root user, these
   547  same containers would survive the logout and continue running.
   548  
   549  #### Solution
   550  
   551  When systemd notes that a session that started a Podman container has exited,
   552  it will also stop any containers that have been associated with it.  To avoid
   553  this, use the following command before logging out: `loginctl enable-linger`.
   554  To later revert the linger functionality, use `loginctl disable-linger`.
   555  
   556  LOGINCTL(1), SYSTEMD(1)
   557  
   558  ### 22) Containers default detach keys conflict with shell history navigation
   559  
Podman defaults to `ctrl-p,ctrl-q` to detach from a running container. The
bash and zsh shells default to `ctrl-p` for displaying the previous
command.  This causes issues when running a shell inside of a container.
   563  
   564  #### Symptom
   565  
With the default detach key combo ctrl-p,ctrl-q, shell history navigation
(tested in bash and zsh) using ctrl-p to access the previous command will not
display this previous command, or anything else.  Conmon is waiting for an
additional character to see if the user wants to detach from the container.
Adding additional characters to the command will cause it to be displayed along
with the additional character. If the user types ctrl-p a second time, the shell
displays the second-to-last command.
   573  
   574  #### Solution
   575  
The solution to this is to change the default detach_keys. For example, in order
to change the default to `ctrl-q,ctrl-q`, use the `--detach-keys` option.
   578  
   579  ```console
   580  $ podman run -ti --detach-keys ctrl-q,ctrl-q fedora sh
   581  ```
   582  
To make this change the default for all containers, you can modify the
containers.conf file. For an individual user, add the following lines to
`~/.config/containers/containers.conf`:
   586  
   587  ```console
   588  $ cat >> ~/.config/containers/containers.conf << _eof
   589  [engine]
   590  detach_keys="ctrl-q,ctrl-q"
   591  _eof
   592  ```
   593  
To affect containers run by root as well as all users, modify the system-wide
defaults in `/etc/containers/containers.conf`.
   596  
   597  
   598  ### 23) Container with exposed ports won't run in a pod
   599  
A container with ports that have been published with the `--publish` or `-p` option
cannot be run within a pod.
   602  
   603  #### Symptom
   604  
   605  ```console
$ podman pod create --name srcview -p 127.0.0.1:3434:3434 -p 127.0.0.1:7080:7080 -p 127.0.0.1:3370:3370
4b2f4611fa2cbd60b3899b936368c2b3f4f0f68bc8e6593416e0ab8ecb0a3f1d
   607  
   608  $ podman run --pod srcview --name src-expose -p 3434:3434 -v "${PWD}:/var/opt/localrepo":Z,ro sourcegraph/src-expose:latest serve /var/opt/localrepo
   609  Error: cannot set port bindings on an existing container network namespace
   610  ```
   611  
   612  #### Solution
   613  
   614  This is a known limitation.  If a container will be run within a pod, it is not necessary
   615  to publish the port for the containers in the pod. The port must only be published by the
   616  pod itself.  Pod network stacks act like the network stack on the host - you have a
   617  variety of containers in the pod, and programs in the container, all sharing a single
   618  interface and IP address, and associated ports. If one container binds to a port, no other
   619  container can use that port within the pod while it is in use. Containers in the pod can
   620  also communicate over localhost by having one container bind to localhost in the pod, and
   621  another connect to that port.
   622  
   623  In the example from the symptom section, dropping the `-p 3434:3434` would allow the
   624  `podman run` command to complete, and the container as part of the pod would still have
   625  access to that port.  For example:
   626  
   627  ```console
   628  $ podman run --pod srcview --name src-expose -v "${PWD}:/var/opt/localrepo":Z,ro sourcegraph/src-expose:latest serve /var/opt/localrepo
   629  ```
   630  
   631  ### 24) Podman container images fail with `fuse: device not found` when run
   632  
   633  Some container images require that the fuse kernel module is loaded in the kernel
   634  before they will run with the fuse filesystem in play.
   635  
   636  #### Symptom
   637  
When trying to run the container images found at quay.io/podman, quay.io/containers,
registry.access.redhat.com/ubi8, or other locations, an error is sometimes returned:
   640  
   641  <!-- this would be better if it showed the command being run, and use ```console markup -->
   642  ```
   643  ERRO error unmounting /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: invalid argument
   644  error mounting container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error mounting build container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error creating overlay mount to /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: using mount program /usr/bin/fuse-overlayfs: fuse: device not found, try 'modprobe fuse' first
   645  fuse-overlayfs: cannot mount: No such device
   646  : exit status 1
   647  ERRO exit status 1
   648  ```
   649  
   650  #### Solution
   651  
If you encounter a `fuse: device not found` error when running a container image, it is likely that
the fuse kernel module has not been loaded on your host system.  Use the command `modprobe fuse` to load the
module and then run the container image afterwards.  To load the module automatically at boot time, you can add a configuration
file to `/etc/modules-load.d`.  See `man modules-load.d` for more details.
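
For example (the filename is arbitrary as long as it ends in `.conf`):

```console
# echo fuse > /etc/modules-load.d/fuse.conf
```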
   656  
   657  ### 25) podman run --rootfs link/to//read/only/dir does not work
   658  
An error such as "OCI runtime error" on a read-only filesystem, or the error "{image} is not an absolute path or is a symlink", is often an indicator of this issue.  For more details, review this [issue](
https://github.com/containers/podman/issues/5895).
   661  
   662  #### Symptom
   663  
Rootless Podman requires certain files to exist in a file system in order to run.
Podman will create `/etc/resolv.conf`, `/etc/hosts` and other files on the rootfs in order
to mount volumes on them.
   667  
   668  #### Solution
   669  
Run the container once in read/write mode; Podman will generate all of the missing files on the rootfs, and
from that point forward you can run with a read-only rootfs.
   672  
   673  ```console
   674  $ podman run --rm --rootfs /path/to/rootfs true
   675  ```
   676  
The command above will create all the missing files and directories needed to run the container.
   678  
   679  After that, it can be used in read-only mode, by multiple containers at the same time:
   680  
   681  ```console
   682  $ podman run --read-only --rootfs /path/to/rootfs ....
   683  ```
   684  
   685  Another option is to use an Overlay Rootfs Mount:
   686  
   687  ```console
   688  $ podman run --rootfs /path/to/rootfs:O ....
   689  ```
   690  
   691  Modifications to the mount point are destroyed when the container
   692  finishes executing, similar to a tmpfs mount point being unmounted.
   693  
   694  ### 26) Running containers with resource limits fails with a permissions error
   695  
   696  On some systemd-based systems, non-root users do not have resource limit delegation
   697  permissions. This causes setting resource limits to fail.
   698  
   699  #### Symptom
   700  
Running a container with resource limit options will fail with an error similar to the following:
   702  
   703  `--cpus`, `--cpu-period`, `--cpu-quota`, `--cpu-shares`:
   704  
   705      Error: OCI runtime error: crun: the requested cgroup controller `cpu` is not available
   706  
   707  `--cpuset-cpus`, `--cpuset-mems`:
   708  
   709      Error: OCI runtime error: crun: the requested cgroup controller `cpuset` is not available
   710  
   711  This means that resource limit delegation is not enabled for the current user.
   712  
   713  #### Solution
   714  
   715  You can verify whether resource limit delegation is enabled by running the following command:
   716  
   717  ```console
   718  $ cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"
   719  ```
   720  
   721  Example output might be:
   722  
   723      memory pids
   724  
   725  In the above example, `cpu` and `cpuset` are not listed, which means the current user does
   726  not have permission to set CPU or CPUSET limits.
   727  
   728  If you want to enable CPU or CPUSET limit delegation for all users, you can create the
   729  file `/etc/systemd/system/user@.service.d/delegate.conf` with the contents:
   730  
   731  ```ini
   732  [Service]
   733  Delegate=memory pids cpu cpuset
   734  ```
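
A sketch of creating that drop-in from a root shell (path and contents match the example above; `systemctl daemon-reload` makes systemd pick up the new file):

```console
# mkdir -p /etc/systemd/system/user@.service.d
# cat > /etc/systemd/system/user@.service.d/delegate.conf << _eof
[Service]
Delegate=memory pids cpu cpuset
_eof
# systemctl daemon-reload
```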
   735  
   736  After logging out and logging back in, you should have permission to set
   737  CPU and CPUSET limits.
   738  
   739  ### 27) `exec container process '/bin/sh': Exec format error` (or another binary than `bin/sh`)
   740  
This can happen when running a container from an image built for a different architecture than the one you are running on.

For example, a remote repository may only have, and thus send you, a `linux/arm64` _OS/ARCH_ image while you run on `linux/amd64` (as happened in https://github.com/openMF/community-app/issues/3323 due to https://github.com/timbru31/docker-ruby-node/issues/564).
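
To check which platform an image provides, or to request one explicitly (note that `--platform` only selects the image variant; actually running foreign binaries still requires emulation):

```console
$ podman image inspect --format '{{.Os}}/{{.Architecture}}' docker.io/library/alpine
linux/amd64
$ podman pull --platform linux/amd64 docker.io/library/alpine
```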
   744  
   745  ### 28) `Error: failed to create sshClient: Connection to bastion host (ssh://user@host:22/run/user/.../podman/podman.sock) failed.: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain`
   746  
In situations where the client is not on the same machine as the Podman service, the client key could use a cipher not supported by the host. This indicates an issue with your SSH configuration. Until remedied, using Podman over SSH
with a pre-shared key will be impossible.
   749  
   750  #### Symptom
   751  
   752  The accepted ciphers per `/etc/crypto-policies/back-ends/openssh.config` are not one that was used to create the public/private key pair that was transferred over to the host for ssh authentication.
   753  
   754  You can confirm this is the case by attempting to connect to the host via `podman-remote info` from the client and simultaneously on the host running `journalctl -f` and watching for the error `userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]`.
   755  
   756  #### Solution
   757  
   758  Create a new key using a supported algorithm e.g. ecdsa:
   759  
   760  ```console
   761  $ ssh-keygen -t ecdsa -f ~/.ssh/podman
   762  ```
   763  
   764  Then copy the new id over:
   765  
   766  ```console
   767  $ ssh-copy-id -i ~/.ssh/podman.pub user@host
   768  ```
   769  
   770  And then re-add the connection (removing the old one if necessary):
   771  
   772  ```console
   773  $ podman-remote system connection add myuser --identity ~/.ssh/podman ssh://user@host/run/user/1000/podman/podman.sock
   774  ```
   775  
   776  And now this should work:
   777  
   778  ```console
   779  $ podman-remote info
   780  ```
   781  
   782  ### 29) Rootless CNI networking fails in RHEL with Podman v2.2.1 to v3.0.1.
   783  
   784  A failure is encountered when trying to use networking on a rootless
   785  container in Podman v2.2.1 through v3.0.1 on RHEL.  This error does not
   786  occur on other Linux distributions.
   787  
   788  #### Symptom
   789  
   790  A rootless container is created using a CNI network, but the `podman run` command
   791  returns an error that an image must be built.
   792  
   793  #### Solution
   794  
   795  In order to use a CNI network in a rootless container on RHEL,
   796  an Infra container image for CNI-in-slirp4netns must be created.  The
   797  instructions for building the Infra container image can be found for
   798  v2.2.1 [here](https://github.com/containers/podman/tree/v2.2.1-rhel/contrib/rootless-cni-infra),
   799  and for v3.0.1 [here](https://github.com/containers/podman/tree/v3.0.1-rhel/contrib/rootless-cni-infra).
   800  
   801  ### 30) Container related firewall rules are lost after reloading firewalld
The container network can't be reached after `firewall-cmd --reload` or `systemctl restart firewalld`.  Running `podman network reload` will fix it, but it has to be done manually.
   803  
   804  #### Symptom
   805  The firewall rules created by podman are lost when the firewall is reloaded.
   806  
   807  #### Solution
[@ranjithrajaram](https://github.com/containers/podman/issues/5431#issuecomment-847758377) has created a systemd hook to fix this issue.
   809  
   810  1) For "firewall-cmd --reload", create a systemd unit file with the following
   811  ```ini
   812  [Unit]
   813  Description=firewalld reload hook - run a hook script on firewalld reload
   814  Wants=dbus.service
   815  After=dbus.service
   816  
   817  [Service]
   818  Type=simple
   819  ExecStart=/bin/bash -c '/bin/busctl monitor --system --match "interface=org.fedoraproject.FirewallD1,member=Reloaded" --match "interface=org.fedoraproject.FirewallD1,member=PropertiesChanged" | while read -r line ; do podman network reload --all ; done'
   820  
   821  [Install]
   822  WantedBy=default.target
   823  ```
   824  
   825  2) For "systemctl restart firewalld", create a systemd unit file with the following
   826  ```ini
   827  [Unit]
   828  Description=podman network reload
   829  Wants=firewalld.service
   830  After=firewalld.service
   831  PartOf=firewalld.service
   832  
   833  [Service]
   834  Type=simple
   835  RemainAfterExit=yes
   836  ExecStart=/usr/bin/podman network reload --all
   837  
   838  [Install]
   839  WantedBy=default.target
   840  ```
   841  
However, if you use `busctl monitor`, you can't get machine-readable output on RHEL 8,
since it doesn't have `busctl -j`, as mentioned by [@yrro](https://github.com/containers/podman/issues/5431#issuecomment-896943018).
   844  
   845  For RHEL 8, you can use the following one-liner bash script.
   846  ```ini
   847  [Unit]
   848  Description=Redo podman NAT rules after firewalld starts or reloads
   849  Wants=dbus.service
   850  After=dbus.service
   851  Requires=firewalld.service
   852  
   853  [Service]
   854  Type=simple
   855  ExecStart=/bin/bash -c "dbus-monitor --profile --system 'type=signal,sender=org.freedesktop.DBus,path=/org/freedesktop/DBus,interface=org.freedesktop.DBus,member=NameAcquired,arg0=org.fedoraproject.FirewallD1' 'type=signal,path=/org/fedoraproject/FirewallD1,interface=org.fedoraproject.FirewallD1,member=Reloaded' | sed -u '/^#/d' | while read -r type timestamp serial sender destination path interface member _junk; do if [[ $type = '#'* ]]; then continue; elif [[ $interface = org.freedesktop.DBus && $member = NameAcquired ]]; then echo 'firewalld started'; podman network reload --all; elif [[ $interface = org.fedoraproject.FirewallD1 && $member = Reloaded ]]; then echo 'firewalld reloaded'; podman network reload --all; fi; done"
Restart=always
   857  
   858  [Install]
   859  WantedBy=default.target
   860  ```
`busctl monitor` is almost usable in RHEL 8, except that it always outputs two bogus events when it starts up,
one of which is (in its only machine-readable format) indistinguishable from the `NameOwnerChanged` signal that you get when firewalld starts up.
This means you would get an extra `podman network reload --all` when this unit starts.
   864  
Apart from that caveat, you can use the following systemd service with the Python 3 code below.
   866  
   867  ```ini
   868  [Unit]
   869  Description=Redo podman NAT rules after firewalld starts or reloads
   870  Wants=dbus.service
   871  Requires=firewalld.service
   872  After=dbus.service
   873  
   874  [Service]
   875  Type=simple
ExecStart=/usr/bin/python3 /path/to/python/code/podman-redo-nat.py
   877  Restart=always
   878  
   879  [Install]
   880  WantedBy=default.target
   881  ```
   882  The code reloads podman network twice when you use `systemctl restart firewalld`.
```python
   884  import dbus
   885  from gi.repository import GLib
   886  from dbus.mainloop.glib import DBusGMainLoop
   887  import subprocess
   888  import sys
   889  
# Reload the podman network when firewalld signals a restart or reload.
   892  
   893  def reload_podman_network():
   894      try:
   895          subprocess.run(["podman","network","reload","--all"],timeout=90)
        # Log to stdout so the journal records the successful reload.
   897          sys.stdout.write("podman network reload done\n")
   898          sys.stdout.flush()
   899      except subprocess.TimeoutExpired as t:
   900          sys.stderr.write(f"Podman reload failed due to Timeout {t}")
   901      except subprocess.CalledProcessError as e:
   902          sys.stderr.write(f"Podman reload failed due to {e}")
   903      except Exception as e:
   904          sys.stderr.write(f"Podman reload failed with an Unhandled Exception {e}")
   905  
   906      return False
   907  
   908  def signal_handler(*args, **kwargs):
   909      if kwargs.get('member') == "Reloaded":
   910          reload_podman_network()
   911      elif kwargs.get('member') == "NameOwnerChanged":
   912          reload_podman_network()
   913      else:
   914          return None
   915      return None
   916  
   917  def signal_listener():
   918      try:
   919          DBusGMainLoop(set_as_default=True)# Define the loop.
   920          loop = GLib.MainLoop()
   921          system_bus = dbus.SystemBus()
   922          # Listens to systemctl restart firewalld with a filter added, will cause podman network to be reloaded twice
   923          system_bus.add_signal_receiver(signal_handler,dbus_interface='org.freedesktop.DBus',arg0='org.fedoraproject.FirewallD1',member_keyword='member')
   924          # Listens to firewall-cmd --reload
   925          system_bus.add_signal_receiver(signal_handler,dbus_interface='org.fedoraproject.FirewallD1',signal_name='Reloaded',member_keyword='member')
   926          loop.run()
   927      except KeyboardInterrupt:
   928          loop.quit()
   929          sys.exit(0)
   930      except Exception as e:
   931          loop.quit()
   932          sys.stderr.write(f"Error occurred {e}")
   933          sys.exit(1)
   934  
   935  if __name__ == "__main__":
   936      signal_listener()
   937  ```
   938  
   939  ### 31) Podman run fails with `ERRO[0000] XDG_RUNTIME_DIR directory "/run/user/0" is not owned by the current user` or `Error: creating tmpdir: mkdir /run/user/1000: permission denied`.
   940  
   941  A failure is encountered when performing `podman run` with a warning `XDG_RUNTIME_DIR is pointing to a path which is not writable. Most likely podman will fail.`
   942  
   943  #### Symptom
   944  
A rootless container is being invoked with cgroup v2 configuration for a user with a missing or invalid **systemd session**.
   946  
   947  Example cases
   948  ```console
   949  # su user1 -c 'podman images'
   950  ERRO[0000] XDG_RUNTIME_DIR directory "/run/user/0" is not owned by the current user
   951  ```
   952  ```console
   953  # su - user1 -c 'podman images'
   954  Error: creating tmpdir: mkdir /run/user/1000: permission denied
   955  ```
   956  
   957  #### Solution
   958  
Podman expects a valid login session for the `rootless+cgroupv2` use case. Podman execution is expected to fail if the login session is not present. In most cases, Podman will figure out a solution on its own, but if `XDG_RUNTIME_DIR` is pointing to a path that is not writable, execution will most likely fail. Typical scenarios for this are users switching identities with `su - <user> -c '<podman-command>'` or `sudo`, or a badly configured systemd session.
   960  
   961  Alternatives:
   962  
* Execute Podman via __systemd-run__, which will first start a systemd login session:
   964  
   965    ```console
   966    $ sudo systemd-run --machine=username@ --quiet --user --collect --pipe --wait podman run --rm docker.io/library/alpine echo hello
   967    ```
   968  * Start an interactive shell in a systemd login session with the command `machinectl shell <username>@`
   969    and then run Podman
   970  
   971    ```console
   972    $ sudo -i
   973    # machinectl shell username@
   974    Connected to the local host. Press ^] three times within 1s to exit session.
   975    $ podman run --rm docker.io/library/alpine echo hello
   976    ```
   977  * Start a new systemd login session by logging in with `ssh` i.e. `ssh <username>@localhost` and then run Podman.
   978  
* Before invoking a Podman command, create a valid login session for your rootless user using `loginctl enable-linger <username>`
   980  
   981  ### 32) 127.0.0.1:7777 port already bound
   982  
   983  After deleting a VM on macOS, the initialization of subsequent VMs fails.
   984  
   985  #### Symptom
   986  
After deleting a client VM on macOS via `podman machine stop && podman machine rm`, attempting to `podman machine init` a new client VM leads to an error that the 127.0.0.1:7777 port is already bound.
   988  
   989  #### Solution
   990  
You will need to remove the hanging `gvproxy` process bound to the port in question. For example, if the port mentioned in the error message is 127.0.0.1:7777, you can use the command `kill -9 $(lsof -t -i:7777)` to identify and remove the hanging process that prevents you from starting a new VM on that default port.
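
For example (the PID shown is illustrative):

```console
$ lsof -t -i:7777
29321
$ kill -9 29321
```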
   992  
   993  ### 33) The sshd process fails to run inside of the container.
   994  
   995  #### Symptom
   996  
   997  The sshd process running inside the container fails with the error
   998  "Error writing /proc/self/loginuid".
   999  
#### Solution
  1001  
  1002  If the `/proc/self/loginuid` file is already initialized then the
  1003  `CAP_AUDIT_CONTROL` capability is required to override it.
  1004  
This happens when running Podman from a user session, since the
`/proc/self/loginuid` file is already initialized.  The solution is to
run Podman from a system service, either by using the Podman service and
`podman --remote` to start the container, or simply by running
something like `systemd-run podman run ...`.  In that case the
container will only need `CAP_AUDIT_WRITE`.
  1011  
  1012  ### 34) Container creates a file that is not owned by the user's regular UID
  1013  
  1014  After running a container with rootless Podman, the non-root user sees a numerical UID and GID instead of a username and groupname.
  1015  
  1016  #### Symptom
  1017  
  1018  When listing file permissions with `ls -l` on the host in a directory that was passed as `--volume /some/dir` to `podman run`,
  1019  the UID and GID are displayed rather than the corresponding username and groupname. The UID and GID numbers displayed are
  1020  from the user's subordinate UID and GID ranges on the host system.
  1021  
  1022  An example
  1023  
  1024  ```console
  1025  $ mkdir dir1
  1026  $ chmod 777 dir1
  1027  $ podman run --rm -v ./dir1:/dir1:Z \
  1028               --user 2003:2003 \
  1029               docker.io/library/ubuntu bash -c "touch /dir1/a; chmod 600 /dir1/a"
  1030  $ ls -l dir1/a
  1031  -rw-------. 1 102002 102002 0 Jan 19 19:35 dir1/a
  1032  $ less dir1/a
  1033  less: dir1/a: Permission denied
  1034  ```
  1035  
  1036  #### Solution
  1037  
  1038  If you want to read, chown, or remove such a file, enter a user
  1039  namespace. Instead of running commands such as `less dir1/a` or `rm dir1/a`, you
  1040  need to prepend the command-line with `podman unshare`, i.e.,
  1041  `podman unshare less dir1/a` or `podman unshare rm dir1/a`. To change the ownership
  1042  of the file `dir1/a` to your regular user's UID and GID, run `podman unshare chown 0:0 dir1/a`.
  1043  A file having the ownership `0:0` in the user namespace is owned by the regular
  1044  user on the host. To use Bash features, such as variable expansion and
  1045  globbing, you need to wrap the command with `bash -c`, e.g.
  1046  `podman unshare bash -c 'ls $HOME/dir1/a*'`.
  1047  
  1048  Would it have been possible to run Podman in another way so that your regular
  1049  user would have become the owner of the file? Yes, you can use the option
  1050  `--userns keep-id:uid=$uid,gid=$gid` to change how UIDs and GIDs are mapped
  1051  between the container and the host. Let's try it out.
  1052  
  1053  In the example above `ls -l` shows the UID 102002 and GID 102002. Set shell variables
  1054  
  1055  ```console
  1056  $ uid_from_ls=102002
  1057  $ gid_from_ls=102002
  1058  ```
  1059  
  1060  Set shell variables to the lowest subordinate UID and GID
  1061  
  1062  ```console
  1063  $ lowest_subuid=$(podman info --format "{{ (index .Host.IDMappings.UIDMap 1).HostID }}")
  1064  $ lowest_subgid=$(podman info --format "{{ (index .Host.IDMappings.GIDMap 1).HostID }}")
  1065  ```
  1066  
  1067  Compute the UID and GID inside the container that map to the owner of the created file on the host.
  1068  
  1069  ```console
  1070  $ uid=$(( $uid_from_ls - $lowest_subuid + 1))
  1071  $ gid=$(( $gid_from_ls - $lowest_subgid + 1))
  1072  ```
  1073  (In the computation it was assumed that there is only one subuid range and one subgid range)
  1074  
  1075  ```console
  1076  $ echo $uid
  1077  2003
  1078  $ echo $gid
  1079  2003
  1080  ```
  1081  
  1082  The computation shows that the UID is `2003` and the GID is `2003` inside the container.
  1083  This comes as no surprise as this is what was specified before with `--user=2003:2003`,
  1084  but the same computation could be used whenever a username is specified
  1085  or the `--user` option is not used.
  1086  
  1087  Run the container again but now with UIDs and GIDs mapped
  1088  
  1089  ```console
  1090  $ mkdir dir1
  1091  $ chmod 777 dir1
$ podman run --rm \
  1093    -v ./dir1:/dir1:Z \
  1094    --user $uid:$gid \
  1095    --userns keep-id:uid=$uid,gid=$gid \
  1096       docker.io/library/ubuntu bash -c "touch /dir1/a; chmod 600 /dir1/a"
$ id -un
tester
$ id -gn
tester
  1101  $ ls -l dir1/a
  1102  -rw-------. 1 tester tester 0 Jan 19 20:31 dir1/a
  1103  $
  1104  ```
  1105  
  1106  In this example the `--user` option specified a rootless user in the container.
  1107  As the rootless user could also have been specified in the container image, e.g.
  1108  
  1109  ```console
  1110  $ podman image inspect --format "user: {{.User}}" IMAGE
  1111  user: hpc
  1112  ```
  1113  the same problem could also occur even without specifying `--user`.
  1114  
Another variant of the same problem could occur when using
`--user=root:root` (the default), but where the root user creates non-root-owned files
in some way (e.g. by creating them directly, or by switching the effective UID to
a rootless user and then creating files).
  1119  
  1120  See also the troubleshooting tip:
  1121  
  1122  [_Podman run fails with "Error: unrecognized namespace mode keep-id:uid=1000,gid=1000 passed"_](#39-podman-run-fails-with-error-unrecognized-namespace-mode-keep-iduid1000gid1000-passed)
  1123  
  1124  ### 35) Passed-in devices or files can't be accessed in rootless container (UID/GID mapping problem)
  1125  
  1126  As a non-root user you have access rights to devices, files and directories that you
  1127  want to pass into a rootless container with `--device=...`, `--volume=...` or `--mount=..`.
  1128  
  1129  Podman by default maps a non-root user inside a container to one of the user's
  1130  subordinate UIDs and subordinate GIDs on the host. When the container user tries to access a
  1131  file, a "Permission denied" error could occur because the container user does not have the
  1132  permissions of the regular user of the host.
  1133  
  1134  #### Symptom
  1135  
  1136  * Any access inside the container is rejected with "Permission denied"
  1137  for files, directories or devices passed in to the container
  1138  with `--device=..`,`--volume=..` or `--mount=..`, e.g.
  1139  
  1140  ```console
  1141  $ mkdir dir1
  1142  $ chmod 700 dir1
  1143  $ podman run --rm -v ./dir1:/dir1:Z \
  1144               --user 2003:2003 \
  1145               docker.io/library/ubuntu ls /dir1
  1146  ls: cannot open directory '/dir1': Permission denied
  1147  ```
  1148  
  1149  #### Solution
  1150  
  1151  We follow essentially the same solution as in the previous
  1152  troubleshooting tip:
  1153  
  1154  [_Container creates a file that is not owned by the user's regular UID_](#34-container-creates-a-file-that-is-not-owned-by-the-users-regular-uid)
  1155  
  1156  but for this problem the container UID and GID can't be as
  1157  easily computed by mere addition and subtraction.
  1158  
  1159  In other words, it might be more challenging to find out the UID and
  1160  the GID inside the container that we want to map to the regular
  1161  user on the host.
  1162  
  1163  If the `--user` option is used together with a numerical UID and GID
  1164  to specify a rootless user, we already know the answer.
  1165  
If the `--user` option is used together with a username and groupname, we could look up
the UID and GID in the files `/etc/passwd` and `/etc/group` of the container.
  1168  
  1169  If the container user is not set via `--user` but instead from the
  1170  container image, we could inspect the container image
  1171  
  1172  ```console
  1173  $ podman image inspect --format "user: {{.User}}" IMAGE
  1174  user: hpc
  1175  ```
  1176  
  1177  and then look it up in `/etc/passwd` of the container.
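
For example, to resolve that username to numerical IDs without starting the image's
normal entrypoint (the user `hpc` and the output line are illustrative):

```console
$ podman run --rm --entrypoint "" IMAGE grep '^hpc:' /etc/passwd
hpc:x:2003:2003::/home/hpc:/bin/bash
```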
  1178  
If the problem occurs in a container that is started as root but later
switches to the effective UID of a rootless user, it might be less
straightforward to find out the UID and the GID. Reading the
`Containerfile`, `Dockerfile` or the `/etc/passwd` of the container could give a clue.
  1183  
To run the container with the rootless container UID and GID mapped to the
user's regular UID and GID on the host, follow these steps:
  1186  
  1187  Set the `uid` and `gid` shell variables in a Bash shell to the UID and GID
  1188  of the user that will be running inside the container, e.g.
  1189  
  1190  ```console
  1191  $ uid=2003
  1192  $ gid=2003
  1193  ```
  1194  
  1195  and run
  1196  
  1197  ```console
  1198  $ mkdir dir1
  1199  $ echo hello > dir1/file.txt
  1200  $ chmod 700 dir1/file.txt
  1201  $ podman run --rm \
  1202    -v ./dir1:/dir1:Z \
  1203    --user $uid:$gid \
  1204    --userns keep-id:uid=$uid,gid=$gid \
  1205    docker.io/library/alpine cat /dir1/file.txt
  1206  hello
  1207  ```
  1208  
  1209  See also the troubleshooting tip:
  1210  
  1211  [_Podman run fails with "Error: unrecognized namespace mode keep-id:uid=1000,gid=1000 passed"_](#39-podman-run-fails-with-error-unrecognized-namespace-mode-keep-iduid1000gid1000-passed)
  1212  
  1213  ### 36) Images in the additional stores can be deleted even if there are containers using them
  1214  
When an image in an additional store is used, it is not locked, so it
can be deleted even if there are containers using it.
  1217  
  1218  #### Symptom
  1219  
  1220  WARN[0000] Can't stat lower layer "/var/lib/containers/storage/overlay/l/7HS76F2P5N73FDUKUQAOJA3WI5" because it does not exist. Going through storage to recreate the missing symlinks.
  1221  
  1222  #### Solution
  1223  
It is the user's responsibility to make sure images in an additional
store are not deleted while they are being used by containers in another
store.
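
For reference, an additional image store is configured in `storage.conf`; the path
below is only an example:

```
# /etc/containers/storage.conf (or ~/.config/containers/storage.conf for rootless)
[storage.options]
additionalimagestores = [ "/var/lib/shared-images" ]
```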
  1227  
  1228  ### 37) Syncing bugfixes for podman-remote or setups using Podman API
  1229  
After upgrading Podman to a newer version, an issue from the earlier version of Podman still presents itself when using podman-remote.
  1231  
  1232  #### Symptom
  1233  
While running podman-remote commands with an up-to-date Podman client, issues that were already fixed in that version can still arise on either the Podman client side or the Podman server side.
  1235  
  1236  #### Solution
  1237  
  1238  When upgrading Podman to a particular version for the required fixes, users often make the mistake of only upgrading the Podman client. However, suppose a setup uses `podman-remote` or uses a client that communicates with the Podman server on a remote machine via the REST API. In that case, it is required to upgrade both the Podman client and the Podman server running on the remote machine. Both the Podman client and server must be upgraded to the same version.
  1239  
Example: if a particular bug was fixed in `v4.1.0`, then both the Podman client and the Podman server must run version `v4.1.0`.
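
To verify that both sides match, run `podman version` against the remote connection;
it reports both the client and the server version (output abbreviated and illustrative):

```console
$ podman --remote version
Client:       Podman Engine
Version:      4.1.0
...
Server:       Podman Engine
Version:      4.1.0
...
```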
  1241  
  1242  ### 38) Unexpected carriage returns are outputted on the terminal
  1243  
When using the __--tty__ (__-t__) flag, unexpected carriage returns appear in the terminal output.
  1245  
  1246  #### Symptom
  1247  
  1248  The container program prints a newline (`\n`) but the terminal outputs a carriage return and a newline (`\r\n`).
  1249  
  1250  ```
  1251  $ podman run --rm -t fedora echo abc | od -c
  1252  0000000   a   b   c  \r  \n
  1253  0000005
  1254  ```
  1255  
  1256  When run directly on the host, the result is as expected.
  1257  
  1258  ```
  1259  $ echo abc | od -c
  1260  0000000   a   b   c  \n
  1261  0000004
  1262  ```
  1263  
  1264  Extra carriage returns can also shift the prompt to the right.
  1265  
  1266  ```
  1267  $ podman run --rm -t fedora sh -c "echo 1; echo 2; echo 3" | cat -A
  1268  1^M$
  1269      2^M$
  1270          3^M$
  1271              $
  1272  ```
  1273  
  1274  #### Solution
  1275  
  1276  Run Podman without the __--tty__ (__-t__) flag.
  1277  
  1278  ```
  1279  $ podman run --rm fedora echo abc | od -c
  1280  0000000   a   b   c  \n
  1281  0000004
  1282  ```
  1283  
The __--tty__ (__-t__) flag should only be used when the program requires user interaction in the terminal, for instance expecting
the user to type an answer to a question.
  1286  
  1287  Where does the extra carriage return `\r` come from?
  1288  
The extra `\r` is not output by Podman but by the terminal. In fact, reconfiguring the terminal can make the extra `\r` go away.
  1290  
  1291  ```
  1292  $ podman run --rm -t fedora /bin/sh -c "stty -onlcr && echo abc" | od -c
  1293  0000000   a   b   c  \n
  1294  0000004
  1295  ```
  1296  
  1297  ### 39) Podman run fails with "Error: unrecognized namespace mode keep-id:uid=1000,gid=1000 passed"
  1298  
Podman 4.3.0 introduced the options _uid_ and _gid_ for `--userns keep-id`; older versions of Podman do not recognize them.
  1300  
  1301  #### Symptom
  1302  
  1303  When using a Podman version older than 4.3.0, the options _uid_ and _gid_ are not recognized, and an "unrecognized namespace mode" error is raised.
  1304  
```
$ uid=1000
$ gid=1000
$ podman run --rm \
  --user $uid:$gid \
  --userns keep-id:uid=$uid,gid=$gid \
     docker.io/library/ubuntu /bin/cat /proc/self/uid_map
Error: unrecognized namespace mode keep-id:uid=1000,gid=1000 passed
$
```
  1315  
  1316  #### Solution
  1317  
Use the __--uidmap__ and __--gidmap__ options to describe the same UID and GID mapping.
  1319  
  1320  Run
  1321  
  1322  ```
  1323  $ uid=1000
  1324  $ gid=1000
  1325  $ subuidSize=$(( $(podman info --format "{{ range \
  1326     .Host.IDMappings.UIDMap }}+{{.Size }}{{end }}" ) - 1 ))
  1327  $ subgidSize=$(( $(podman info --format "{{ range \
  1328     .Host.IDMappings.GIDMap }}+{{.Size }}{{end }}" ) - 1 ))
  1329  $ podman run --rm \
  1330    --user $uid:$gid \
  1331    --uidmap 0:1:$uid \
  1332    --uidmap $uid:0:1 \
  1333    --uidmap $(($uid+1)):$(($uid+1)):$(($subuidSize-$uid)) \
  1334    --gidmap 0:1:$gid \
  1335    --gidmap $gid:0:1 \
  1336    --gidmap $(($gid+1)):$(($gid+1)):$(($subgidSize-$gid)) \
  1337       docker.io/library/ubuntu /bin/cat /proc/self/uid_map
  1338           0          1       1000
  1339        1000          0          1
  1340        1001       1001      64536
  1341  ```
  1342  
which uses the same UID and GID mapping as specifying
`--userns keep-id:uid=$uid,gid=$gid` with Podman 4.3.0 (or greater):
  1345  
  1346  ```
  1347  $ uid=1000
  1348  $ gid=1000
  1349  $ podman run --rm \
  1350    --user $uid:$gid \
  1351    --userns keep-id:uid=$uid,gid=$gid \
  1352       docker.io/library/ubuntu /bin/cat /proc/self/uid_map
  1353           0          1       1000
  1354        1000          0          1
  1355        1001       1001      64536
  1356  ```
  1357  
  1358  Replace `/bin/cat /proc/self/uid_map` with
  1359  `/bin/cat /proc/self/gid_map` to show the GID mapping.
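
To check whether the installed Podman is new enough to support these options,
query the client version with a Go template (output illustrative):

```
$ podman version --format '{{.Client.Version}}'
4.3.0
```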
  1360  
  1361  ### 40) Podman fails to find expected image with "error locating pulled image", "image not known"
  1362  
When running a Podman command that pulls an image from local storage or a remote repository,
an error is raised saying "image not known" or "error locating pulled image", even though
the image had been verified to be in place before the Podman command was invoked.
  1366  
  1367  #### Symptom
  1368  
  1369  After verifying that an image is in place either locally or on a remote repository, a Podman command
  1370  referencing that image will fail in a manner like:
```
# cat Containerfile
FROM registry.access.redhat.com/ubi8-minimal:latest
MAINTAINER Podman Community
USER root

# podman build .
STEP 1/2: FROM registry.access.redhat.com/ubi8-minimal
Trying to pull registry.access.redhat.com/ubi8-minimal:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob a6577091999b done
Copying config abb1ba1bce done
Writing manifest to image destination
Storing signatures
Error: error creating build container: error locating pulled image "registry.access.redhat.com/ubi8-minimal:latest" name in containers storage: registry.access.redhat.com/ubi8-minimal:latest: image not known
```
  1388  
  1389  #### Solution
The general cause for this is a timing issue.  To make Podman commands as
efficient as possible, read and write locks are only established for critical
sections within the code.  When pulling an image from a repository, a copy of
that image is first written to local storage under a write lock.  This lock is
released before the image is then acquired/read.  If another process runs a
destructive command such as `podman system prune --all`, `podman system reset`,
or `podman rmi --all` between the time the image is written and the time the
first process can acquire it, this type of `image not known` error can arise.
  1398  
The maintainers of Podman have considered heavier-duty locks to close this
timing window. However, the slowdown that all Podman commands would incur
was not considered worth the benefit of completely closing this small timing window.
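
If this race is hit in practice, e.g. by concurrent cleanup jobs in a CI environment,
a simple workaround (a plain shell sketch, not an official Podman recommendation) is
to retry the failed command:

```
$ for i in 1 2 3; do podman build . && break; done
```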
  1402  
  1403  ### 41) A podman build step with `--mount=type=secret` fails with "operation not permitted"
  1404  
  1405  Executing a step in a `Dockerfile`/`Containerfile` which mounts secrets using `--mount=type=secret` fails with "operation not permitted" when running on a host filesystem mounted with `nosuid` and when using the `runc` runtime.
  1406  
  1407  #### Symptom
  1408  
  1409  A `RUN` line in the `Dockerfile`/`Containerfile` contains a [secret mount](https://github.com/containers/common/blob/main/docs/Containerfile.5.md) such as `--mount=type=secret,id=MY_USER,target=/etc/dnf/vars/MY_USER`.
  1410  When running `podman build` the process fails with an error message like:
  1411  
  1412  ```
  1413  STEP 3/13: RUN --mount=type=secret,id=MY_USER,target=/etc/dnf/vars/MY_USER     --mount=type=secret,id=MY_USER,target=/etc/dnf/vars/MY_USER     ...: time="2023-06-13T18:04:59+02:00" level=error msg="runc create failed: unable to start container process: error during container init: error mounting \"/var/tmp/buildah2251989386/mnt/buildah-bind-target-11\" to rootfs at \"/etc/dnf/vars/MY_USER\": mount /var/tmp/buildah2251989386/mnt/buildah-bind-target-11:/etc/dnf/vars/MY_USER (via /proc/self/fd/7), flags: 0x1021: operation not permitted"
  1414  : exit status 1
  1415  ERRO[0002] did not get container create message from subprocess: EOF
  1416  ```
  1417  
  1418  #### Solution
  1419  
  1420  * Install `crun`, e.g. with `dnf install crun`.
* Use the `crun` runtime by passing `--runtime /usr/bin/crun` to `podman build`, as in the example below.
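
For example (assuming `crun` is installed at `/usr/bin/crun`):

```console
$ podman build --runtime /usr/bin/crun .
```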
  1422  
  1423  See also [Buildah issue 4228](https://github.com/containers/buildah/issues/4228) for a full discussion of the problem.
  1424  
### 42) podman-in-podman builds that are file I/O intensive are very slow
  1426  
When using the `overlay` storage driver to do a nested `podman build` inside a running container, file I/O operations such as a `COPY` of a large amount of data are very slow or can hang completely.
  1428  
  1429  #### Symptom
  1430  
Using the default `overlay` storage driver, a `COPY`, `ADD`, or I/O-intensive `RUN` line in a `Containerfile` that is executed inside another container is very slow or hangs completely when running a `podman build` inside the running parent container.
  1432  
  1433  #### Solution
  1434  
This could be caused by the child container using `fuse-overlayfs` for writing to `/var/lib/containers/storage`, since writes through `fuse-overlayfs` can be slow. The solution is to use the native `overlay` filesystem by mounting a local directory on the host system as a volume at `/var/lib/containers/storage`, as shown below. Ensure that the base image of `parent:latest` in this example has no contents in `/var/lib/containers/storage` in the image itself for this to work. With the native volume in place, the nested container should not fall back to `fuse-overlayfs` for writing files, and the nested build will complete much faster.
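
A copy-paste-ready form of that invocation (`parent:latest` and `./nested_storage` are
the example names from the paragraph above):

```console
$ mkdir -p nested_storage
$ podman run --privileged --rm -it \
    -v ./nested_storage:/var/lib/containers/storage \
    parent:latest
```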
  1436  
If you don't have access to the parent run process, such as in a CI environment, the second option is to change the storage driver to `vfs` in the parent image by changing this line in your `storage.conf` file: `driver = "vfs"` (see the snippet below). You may have to run `podman system reset` for the change to take effect. You know it has changed when `podman info | grep graphDriverName` outputs `graphDriverName: vfs`. This method performs worse than the volume method above, but it is still significantly faster than `fuse-overlayfs`.
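
For reference, the relevant `storage.conf` setting (the file is usually
`/etc/containers/storage.conf`, or `~/.config/containers/storage.conf` for rootless users):

```
[storage]
driver = "vfs"
```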