
![PODMAN logo](logo/podman-logo-source.svg)

# Troubleshooting

## A list of common issues and solutions for Podman

---
### 1) Variety of issues - Validate Version

A large number of issues reported against Podman are often found to already be fixed
in more current versions of the project.  Before reporting an issue, please verify the
version you are running with `podman version` and compare it to the latest release
documented on the top of Podman's [README.md](README.md).

If they differ, please update your version of Podman to the latest available
and retry your command before reporting the issue.
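
For example (the version shown here is illustrative):

```console
$ podman version
Version:      3.4.10
API Version:  3.4.10
```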

---
### 2) Can't use volume mount, get permission denied

```console
$ podman run -v ~/mycontent:/content fedora touch /content/file
touch: cannot touch '/content/file': Permission denied
```

#### Solution

This is sometimes caused by SELinux, and sometimes by user namespaces.

Labeling systems like SELinux require that proper labels are placed on volume
content mounted into a container. Without a label, the security system might
prevent the processes running inside the container from using the content. By
default, Podman does not change the labels set by the OS.

To change a label in the container context, you can add either of two suffixes
**:z** or **:Z** to the volume mount. These suffixes tell Podman to relabel file
objects on the shared volumes. The **z** option tells Podman that two containers
share the volume content. As a result, Podman labels the content with a shared
content label. Shared volume labels allow all containers to read/write content.
The **Z** option tells Podman to label the content with a private unshared label.
Only the current container can use a private volume.

```console
$ podman run -v ~/mycontent:/content:Z fedora touch /content/file
```

Make sure the content is private for the container.  Do not relabel system directories and content.
Relabeling system content might cause other confined services on your machine to fail.  For these
types of containers we recommend disabling SELinux separation.  The option `--security-opt label=disable`
disables SELinux separation for the container.

```console
$ podman run --security-opt label=disable -v ~:/home/user fedora touch /home/user/file
```

In cases where the container image runs as a specific, non-root user, though, the
solution is to fix the user namespace.  This would include container images such as
the Jupyter Notebook image (which runs as "jovyan") and the Postgres image (which runs
as "postgres").  In either case, use the `--userns` switch to map user namespaces,
most of the time by using the **keep-id** option.

```console
$ podman run -v "$PWD":/home/jovyan/work --userns=keep-id jupyter/scipy-notebook
```
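
With `--userns=keep-id`, the UID inside the container should match your host UID; a quick check (assuming your host UID is 1000):

```console
$ podman run --rm --userns=keep-id fedora id -u
1000
```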

---
### 3) No such image or Bare keys cannot contain ':'

When doing a `podman pull` or `podman build` command and a "common" image cannot be pulled,
it is likely that the `/etc/containers/registries.conf` file is either not installed or possibly
misconfigured.

#### Symptom

```console
$ sudo podman build -f Dockerfile
STEP 1: FROM alpine
error building: error creating build container: no such image "alpine" in registry: image not known
```

or

```console
$ sudo podman pull fedora
error pulling image "fedora": unable to pull fedora: error getting default registries to try: Near line 9 (last key parsed ''): Bare keys cannot contain ':'.
```

#### Solution

  * Verify that the `/etc/containers/registries.conf` file exists.  If not, verify that the containers-common package is installed.
  * Verify that the entries in the `unqualified-search-registries` list of the `/etc/containers/registries.conf` file are valid and reachable.
    * e.g. `unqualified-search-registries = ["registry.fedoraproject.org", "quay.io", "registry.access.redhat.com"]`

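If the file is missing entirely, a minimal `/etc/containers/registries.conf` containing just the search list above is enough to restore short-name pulls (a sketch, not a complete configuration):

```console
$ cat /etc/containers/registries.conf
unqualified-search-registries = ["registry.fedoraproject.org", "quay.io", "registry.access.redhat.com"]
```
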
---
### 4) http: server gave HTTP response to HTTPS client

When doing a Podman command such as `build`, `commit`, `pull`, or `push` to a registry,
TLS verification is turned on by default.  If authentication is not used with
those commands, this error can occur.

#### Symptom

```console
$ sudo podman push alpine docker://localhost:5000/myalpine:latest
Getting image source signatures
Get https://localhost:5000/v2/: http: server gave HTTP response to HTTPS client
```

#### Solution

By default, TLS verification is turned on when communicating with registries from
Podman.  If the registry does not require authentication, Podman commands
such as `build`, `commit`, `pull` and `push` will fail unless TLS verification is turned
off using the `--tls-verify` option.  **NOTE:** Communicating with a registry
without TLS verification is not recommended.

  * Turn off TLS verification by passing false to the `--tls-verify` option.
  * e.g. `podman push --tls-verify=false alpine docker://localhost:5000/myalpine:latest`

---
### 5) rootless containers cannot ping hosts

When using the ping command from a non-root container, the command may
fail because of a lack of privileges.

#### Symptom

```console
$ podman run --rm fedora ping -W10 -c1 redhat.com
PING redhat.com (209.132.183.105): 56 data bytes

--- redhat.com ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
```

#### Solution

It is most likely necessary to enable unprivileged pings on the host.
Be sure the UID of the user is part of the range in the
`/proc/sys/net/ipv4/ping_group_range` file.

To change its value you can use something like: `sysctl -w
"net.ipv4.ping_group_range=0 2000000"`.

To make the change persistent, you'll need to add a file in
`/etc/sysctl.d` that contains `net.ipv4.ping_group_range=0 $MAX_UID`.
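
For example, as root (the file name is illustrative):

```console
# echo "net.ipv4.ping_group_range=0 2000000" > /etc/sysctl.d/50-ping-group-range.conf
# sysctl --system
```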

---
### 6) Build hangs when the Dockerfile contains the useradd command

When the Dockerfile contains a command like `RUN useradd -u 99999000 -g users newuser` the build can hang.

#### Symptom

If you are using a useradd command within a Dockerfile with a large UID/GID, it will create a large sparse file `/var/log/lastlog`.  This can cause the build to hang forever.  Go language does not support sparse files correctly, which can lead to some huge files being created in your container image.

#### Solution

If the entry in the Dockerfile looked like `RUN useradd -u 99999000 -g users newuser`, then add the `--no-log-init` parameter to change it to `RUN useradd --no-log-init -u 99999000 -g users newuser`. This option tells useradd to stop creating the lastlog file.

### 7) Permission denied when running Podman commands

When rootless Podman attempts to execute a container from a home directory mounted noexec, a permission error is raised.

#### Symptom

If you are running Podman or Buildah from a home directory that is mounted noexec,
they will fail with a message like:

```
podman run centos:7
standard_init_linux.go:203: exec user process caused "permission denied"
```

#### Solution

Since the administrator of the system set up your home directory to be noexec, you will not be allowed to execute containers from storage in your home directory. It is possible to work around this by manually specifying a container storage path that is not on a noexec mount. Simply copy the file /etc/containers/storage.conf to ~/.config/containers/ (creating the directory if necessary). Specify a graphroot directory which is not on a noexec mount point and to which you have read/write privileges.  You will need to modify other fields to writable directories as well.

For example

```
cat ~/.config/containers/storage.conf
[storage]
  driver = "overlay"
  runroot = "/run/user/1000"
  graphroot = "/execdir/myuser/storage"
  [storage.options]
    mount_program = "/bin/fuse-overlayfs"
```
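
A sketch of the setup steps described above (adjust `graphroot` to a directory on an exec-capable mount):

```console
$ mkdir -p ~/.config/containers
$ cp /etc/containers/storage.conf ~/.config/containers/storage.conf
$ vi ~/.config/containers/storage.conf
```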

### 8) Permission denied when running systemd within a Podman container

When running systemd as PID 1 inside of a container on an SELinux
separated machine, it needs to write to the cgroup file system.

#### Symptom

Systemd gets permission denied when attempting to write to the cgroup file
system, and AVC messages start to show up in the audit.log file or journal on
the system.

#### Solution

Newer versions of Podman (2.0 or greater) support running init based containers
with a different SELinux label, which allows the container process access to the
cgroup file system. This feature requires container-selinux-2.132 or newer
versions.

Prior to Podman 2.0, the SELinux boolean `container_manage_cgroup` allows
container processes to write to the cgroup file system. Turn on this boolean,
on SELinux separated systems, to allow systemd to run properly in the container.
Only do this on systems running older versions of Podman.

`setsebool -P container_manage_cgroup true`

### 9) Newuidmap missing when running rootless Podman commands

Rootless Podman requires the newuidmap and newgidmap programs to be installed.

#### Symptom

If you are running Podman or Buildah as a rootless user, you get an error complaining about
a missing newuidmap executable.

```
podman run -ti fedora sh
command required for rootless mode with multiple IDs: exec: "newuidmap": executable file not found in $PATH
```

#### Solution

Install a version of shadow-utils that includes these executables.  Note that for RHEL and CentOS 7, at least the 7.7 release must be installed for support to be available.

### 10) rootless setup user: invalid argument

Rootless Podman requires the user running it to have a range of UIDs listed in /etc/subuid and /etc/subgid.

#### Symptom

A user, either via --user or through the default configured for the image, is not mapped inside the namespace.

```
podman run --rm -ti --user 1000000 alpine echo hi
Error: container create failed: container_linux.go:344: starting container process caused "setup user: invalid argument"
```

#### Solution

Update /etc/subuid and /etc/subgid with entries for the user that look like:

```
cat /etc/subuid
johndoe:100000:65536
test:165536:65536
```

The format of this file is USERNAME:UID:RANGE

* username as listed in /etc/passwd or getpwent.
* The initial UID allocated for the user.
* The size of the range of UIDs allocated for the user.

This means johndoe is allocated UIDs 100000-165535 as well as his standard UID in the
/etc/passwd file.

You should ensure that each user has a unique range of UIDs, because overlapping UIDs
would potentially allow one user to attack another user. In addition, make sure
that the range of UIDs you allocate can cover all UIDs that the container
requires. For example, if the container has a user with UID 10000, ensure you
have at least 10001 subuids.

You could also use the usermod program to assign UIDs to a user.

If you update either the /etc/subuid or /etc/subgid file, you need to
stop all running containers and kill the pause process.  This is done
automatically by the `system migrate` command.

```
usermod --add-subuids 200000-201000 --add-subgids 200000-201000 johndoe
grep johndoe /etc/subuid /etc/subgid
/etc/subuid:johndoe:200000:1001
/etc/subgid:johndoe:200000:1001
```
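
After changing the allocations, stop the running containers and the pause process as noted above:

```console
$ podman system migrate
```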

### 11) Changing the location of the Graphroot leads to permission denied

When I change the graphroot storage location in storage.conf, the next time I
run Podman I get an error like:

```
# podman run -p 5000:5000 -it centos bash

bash: error while loading shared libraries: /lib64/libc.so.6: cannot apply additional memory protection after relocation: Permission denied
```

For example, the admin sets up a spare disk to be mounted at `/srv/containers`,
and points storage.conf at this directory.

#### Symptom

SELinux blocks containers from using arbitrary locations for overlay storage.
These directories need to be labeled with the same labels as if the content was
under /var/lib/containers/storage.

#### Solution

Tell SELinux about the new containers storage by setting up an equivalence record.
This tells SELinux to label content under the new path, as if it was stored
under `/var/lib/containers/storage`.

```
semanage fcontext -a -e /var/lib/containers /srv/containers
restorecon -R -v /srv/containers
```

The semanage command above tells SELinux to set up the default labeling of
`/srv/containers` to match `/var/lib/containers`.  The `restorecon` command
tells SELinux to apply the labels to the actual content.

Now all new content created in these directories will automatically be created
with the correct label.

### 12) Anonymous image pull fails with 'invalid username/password'

Pulling an anonymous image that doesn't require authentication can result in an
`invalid username/password` error.

#### Symptom

If you pull an anonymous image, one that should not require credentials, you can receive
an `invalid username/password` error if you have credentials established in the
authentication file for the target container registry that are no longer valid.

```
podman run -it --rm docker://docker.io/library/alpine:latest ls
Trying to pull docker://docker.io/library/alpine:latest...ERRO[0000] Error pulling image ref //alpine:latest: Error determining manifest MIME type for docker://alpine:latest: unable to retrieve auth token: invalid username/password
Failed
Error: unable to pull docker://docker.io/library/alpine:latest: unable to pull image: Error determining manifest MIME type for docker://alpine:latest: unable to retrieve auth token: invalid username/password
```

This can happen if the authentication file is modified 'by hand' or if the credentials
are established locally and then the password is updated later in the container registry.

#### Solution

Depending upon which container tool was used to establish the credentials, use `podman logout`
or `docker logout` to remove the credentials from the authentication file.
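
For example, to remove stale credentials for Docker Hub:

```console
$ podman logout docker.io
```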

### 13) Running Podman inside a container causes container crashes and inconsistent states

Running Podman in a container and forwarding some, but not all, of the required host directories can cause inconsistent container behavior.

#### Symptom

After creating a container with Podman's storage directories mounted in from the host and running Podman inside a container, all containers show their state as "configured" or "created", even if they were running or stopped.

#### Solution

When running Podman inside a container, it is recommended to mount at a minimum `/var/lib/containers/storage/` as a volume.
Typically, you will not mount in the host version of the directory, but if you wish to share containers with the host, you can do so.
If you do mount in the host's `/var/lib/containers/storage`, however, you must also mount in the host's `/run/libpod` and `/run/containers/storage` directories.
Not doing this will cause Podman in the container to detect that temporary files have been cleared, leading it to assume a system restart has taken place.
This can cause Podman to reset container states and lose track of running containers.

For running containers on the host from inside a container, we also recommend the [Podman remote client](docs/tutorials/remote_client.md), which only requires a single socket to be mounted into the container.
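
A sketch of the recommended minimum, keeping the inner Podman's storage in a named volume rather than sharing the host's (the image and volume names are illustrative):

```console
$ podman run --privileged -v mypodmanstore:/var/lib/containers/storage quay.io/podman/stable podman info
```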

### 14) Rootless 'podman build' fails EPERM on NFS

NFS enforces file creation with differing UIDs on the server side and does not understand user namespaces, which rootless Podman requires.
When a container root process like YUM attempts to create a file owned by a different UID, the NFS server denies the creation.
NFS is also a problem for file locks when the storage is on it.  Other distributed file systems (for example: Lustre, Spectrum Scale, the General Parallel File System (GPFS)) are also not supported when running in rootless mode as these file systems do not understand user namespaces.

#### Symptom
```console
$ podman build .
ERRO[0014] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
error creating build container: Error committing the finished image: error adding layer with blob "sha256:a02a4930cb5d36f3290eb84f4bfa30668ef2e9fe3a1fb73ec015fc58b9958b17": ApplyLayer exit status 1 stdout:  stderr: open /root/.bash_logout: permission denied
```

#### Solution
Choose one of the following:
  * Set up containers/storage in a different directory, not on an NFS share (see the sketch after this list).
    * Create a directory on a local file system.
    * Edit `~/.config/containers/containers.conf` and point the `volume_path` option to that local directory. (Copy /usr/share/containers/containers.conf if ~/.config/containers/containers.conf does not exist)
  * Otherwise just run Podman as root, via `sudo podman`
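
A sketch of those steps (paths are illustrative):

```console
$ mkdir -p ~/.config/containers /var/tmp/$USER/containers-volumes
$ cp /usr/share/containers/containers.conf ~/.config/containers/containers.conf
$ vi ~/.config/containers/containers.conf
```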

### 15) Rootless 'podman build' fails when using OverlayFS

The Overlay file system (OverlayFS) requires the ability to call the `mknod` command when creating whiteout files
when extracting an image.  However, a rootless user does not have the privileges to use `mknod` in this capacity.

#### Symptom
```console
podman build --storage-driver overlay .
STEP 1: FROM docker.io/ubuntu:xenial
Getting image source signatures
Copying blob edf72af6d627 done
Copying blob 3e4f86211d23 done
Copying blob 8d3eac894db4 done
Copying blob f7277927d38a done
Copying config 5e13f8dd4c done
Writing manifest to image destination
Storing signatures
Error: error creating build container: Error committing the finished image: error adding layer with blob "sha256:8d3eac894db4dc4154377ad28643dfe6625ff0e54bcfa63e0d04921f1a8ef7f8": Error processing tar file(exit status 1): operation not permitted
```

#### Solution
Choose one of the following:
  * Complete the build operation as a privileged user.
  * Install and configure fuse-overlayfs (see the sketch after this list).
    * Install the fuse-overlayfs package for your Linux Distribution.
    * Add `mount_program = "/usr/bin/fuse-overlayfs"` under `[storage.options]` in your `~/.config/containers/storage.conf` file.
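
For example, on Fedora (the package manager may differ on your distribution; if your storage.conf already has a `[storage.options]` section, edit it instead of appending):

```console
$ sudo dnf install -y fuse-overlayfs
$ cat >> ~/.config/containers/storage.conf <<EOF
[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
EOF
```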

### 16) RHEL 7 and CentOS 7 based `init` images don't work with cgroup v2

The systemd version shipped in RHEL 7 and CentOS 7 doesn't have support for cgroup v2.  Support for cgroup v2 requires version 230 of systemd or newer, which
was never shipped or supported on RHEL 7 or CentOS 7.

#### Symptom
```console
sh# podman run --name test -d registry.access.redhat.com/rhel7-init:latest && sleep 10 && podman exec test systemctl status
c8567461948439bce72fad3076a91ececfb7b14d469bfa5fbc32c6403185beff
Failed to get D-Bus connection: Operation not permitted
Error: non zero exit code: 1: OCI runtime error
```

#### Solution
You'll need to either:

* configure the host to use cgroup v1

```
On Fedora you can do:
# dnf install -y grubby
# grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
# reboot
```

* update the image to use an updated version of systemd.

### 17) rootless containers exit once the user session exits

You need to set lingering mode through loginctl to prevent user processes from being killed once
the user session has completed.

#### Symptom

Once the user logs out, all the containers exit.

#### Solution
You'll need to either:

* `loginctl enable-linger $UID`

or, as root, if your user does not have enough privileges:

* `sudo loginctl enable-linger $UID`

### 18) `podman run` fails with "bpf create: permission denied error"

The Kernel Lockdown patches deny eBPF programs when Secure Boot is enabled in the BIOS. [Matthew Garrett's post](https://mjg59.dreamwidth.org/50577.html) describes the relationship between Lockdown and Secure Boot, and [Jan-Philip Gehrcke's](https://gehrcke.de/2019/09/running-an-ebpf-program-may-require-lifting-the-kernel-lockdown/) connects this with eBPF. [RH bug 1768125](https://bugzilla.redhat.com/show_bug.cgi?id=1768125) contains some additional details.

#### Symptom

Attempts to run podman result in

```
Error: bpf create : Operation not permitted: OCI runtime permission denied error
```

#### Solution

One workaround is to disable Secure Boot in your BIOS.

### 19) error creating libpod runtime: there might not be enough IDs available in the namespace

Unable to pull images.

#### Symptom

```console
$ podman unshare cat /proc/self/uid_map
	 0       1000          1
```

#### Solution

```console
$ podman system migrate
```

The original command now returns

```
$ podman unshare cat /proc/self/uid_map
	 0       1000          1
	 1     100000      65536
```

Reference the [subuid](https://man7.org/linux/man-pages/man5/subuid.5.html) and [subgid](https://man7.org/linux/man-pages/man5/subgid.5.html) man pages for more detail.

### 20) Passed-in devices or files can't be accessed in rootless container

As a non-root user you have group access rights to a device or files that you
want to pass into a rootless container with `--device=...` or `--volume=...`

#### Symptom

Any access inside the container is rejected with "Permission denied".

#### Solution

The runtime uses `setgroups(2)`, hence the process loses all additional groups
the non-root user has. Use the `--group-add keep-groups` flag to pass the
user's supplementary group access into the container. Currently only available
with the `crun` OCI runtime.
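
For example, to access a group-owned device node from the container (the device path is illustrative; requires crun):

```console
$ podman run --rm --group-add keep-groups --device /dev/dri/renderD128 fedora ls -l /dev/dri/renderD128
```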

### 21) A rootless container running in detached mode is closed at logout

When running a container with a command like `podman run --detach httpd` as
a rootless user, the container is closed upon logout and is not kept running.

#### Symptom

When logging out of a rootless user session, all containers that were started
in detached mode are stopped and are not kept running.  As the root user, these
same containers would survive the logout and continue running.

#### Solution

When systemd notes that a session that started a Podman container has exited,
it will also stop any containers that have been associated with it.  To avoid
this, use the following command before logging out: `loginctl enable-linger`.
To later revert the linger functionality, use `loginctl disable-linger`.

See also: loginctl(1), systemd(1).

### 22) Containers default detach keys conflict with shell history navigation

Podman defaults to `ctrl-p,ctrl-q` to detach from a running container. The
bash and zsh shells default to ctrl-p for displaying the previous
command.  This causes issues when running a shell inside of a container.

#### Symptom

With the default detach key combo ctrl-p,ctrl-q, shell history navigation
(tested in bash and zsh) using ctrl-p to access the previous command will not
display this previous command, or anything else.  Conmon is waiting for an
additional character to see if the user wants to detach from the container.
Adding additional characters to the command will cause it to be displayed along
with the additional character. If the user types ctrl-p a second time, the shell
displays the second-to-last command.

#### Solution

The solution to this is to change the default detach_keys. For example, in order
to change the defaults to `ctrl-q,ctrl-q`, use the `--detach-keys` option.

```
podman run -ti --detach-keys ctrl-q,ctrl-q fedora sh
```

To make this change the default for all containers, users can modify the
containers.conf file. This can be done in your home directory by adding the
following lines to the user's containers.conf:

```
$ cat >> ~/.config/containers/containers.conf << _eof
[engine]
detach_keys="ctrl-q,ctrl-q"
_eof
```

In order to affect containers run by root and all users, modify the system-wide
defaults in /etc/containers/containers.conf.

### 23) Container with exposed ports won't run in a pod

A container with ports that have been published with the `--publish` or `-p` option
cannot be run within a pod.

#### Symptom

```
$ podman pod create --name srcview -p 127.0.0.1:3434:3434 -p 127.0.0.1:7080:7080 -p 127.0.0.1:3370:3370
4b2f4611fa2cbd60b3899b936368c2b3f4f0f68bc8e6593416e0ab8ecb0a3f1d

$ podman run --pod srcview --name src-expose -p 3434:3434 -v "${PWD}:/var/opt/localrepo":Z,ro sourcegraph/src-expose:latest serve /var/opt/localrepo
Error: cannot set port bindings on an existing container network namespace
```

#### Solution

This is a known limitation.  If a container will be run within a pod, it is not necessary
to publish the port for the containers in the pod. The port must only be published by the
pod itself.  Pod network stacks act like the network stack on the host - you have a
variety of containers in the pod, and programs in the container, all sharing a single
interface and IP address, and associated ports. If one container binds to a port, no other
container can use that port within the pod while it is in use. Containers in the pod can
also communicate over localhost by having one container bind to localhost in the pod, and
another connect to that port.

In the example from the symptom section, dropping the `-p 3434:3434` would allow the
`podman run` command to complete, and the container as part of the pod would still have
access to that port.  For example:

```
$ podman run --pod srcview --name src-expose -v "${PWD}:/var/opt/localrepo":Z,ro sourcegraph/src-expose:latest serve /var/opt/localrepo
```

### 24) Podman container images fail with `fuse: device not found` when run

Some container images require that the fuse kernel module is loaded in the kernel
before they will run with the fuse file system in play.

#### Symptom

When trying to run the container images found at quay.io/podman, quay.io/containers,
registry.access.redhat.com/ubi8 or other locations, an error will sometimes be returned:

```
ERRO error unmounting /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: invalid argument
error mounting container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error mounting build container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error creating overlay mount to /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: using mount program /usr/bin/fuse-overlayfs: fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such device
: exit status 1
ERRO exit status 1
```

#### Solution

If you encounter a `fuse: device not found` error when running the container image, it is likely that
the fuse kernel module has not been loaded on your host system.  Use the command `modprobe fuse` to load the
module and then run the container image afterwards.  To enable this automatically at boot time, you can add a configuration
file to `/etc/modules-load.d`.  See `man modules-load.d` for more details.
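
For example, to load the module now and on every boot (the file name is illustrative):

```console
# modprobe fuse
# echo fuse > /etc/modules-load.d/fuse.conf
```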

### 25) podman run --rootfs link/to//read/only/dir does not work

An error such as "OCI runtime error" on a read-only file system or the error "{image} is not an absolute path or is a symlink" is often an indicator of this issue.  For more details, review this [issue](
https://github.com/containers/podman/issues/5895).

#### Symptom

Rootless Podman requires certain files to exist in a file system in order to run.
Podman will create /etc/resolv.conf, /etc/hosts and other files on the rootfs in order
to mount volumes on them.

#### Solution

Run the container once in read/write mode. Podman will generate all of the needed files on the rootfs, and
from that point forward you can run with a read-only rootfs.

```console
$ podman run --rm --rootfs /path/to/rootfs true
```

The command above will create all the missing directories needed to run the container.

After that, it can be used in read-only mode, by multiple containers at the same time:

```console
$ podman run --read-only --rootfs /path/to/rootfs ....
```

Another option would be to create an overlay file system on the directory as a lower layer and
then allow podman to create the files on the upper layer.

### 26) Running containers with CPU limits fails with a permissions error

On some systemd-based systems, non-root users do not have CPU limit delegation
permissions. This causes setting CPU limits to fail.

#### Symptom

Running a container with CPU limit options such as `--cpus`, `--cpu-period`,
or `--cpu-quota` will fail with an error similar to the following:

    Error: opening file `cpu.max` for writing: Permission denied: OCI runtime permission denied error

This means that CPU limit delegation is not enabled for the current user.

#### Solution

You can verify whether CPU limit delegation is enabled by running the following command:

    cat "/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"

Example output might be:

    memory pids

In the above example, `cpu` is not listed, which means the current user does
not have permission to set CPU limits.

If you want to enable CPU limit delegation for all users, you can create the
file `/etc/systemd/system/user@.service.d/delegate.conf` with the contents:

    [Service]
    Delegate=memory pids cpu io

After logging out and logging back in, you should have permission to set CPU
limits.
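
A sketch of those steps as root:

```console
# mkdir -p /etc/systemd/system/user@.service.d
# cat > /etc/systemd/system/user@.service.d/delegate.conf <<EOF
[Service]
Delegate=memory pids cpu io
EOF
# systemctl daemon-reload
```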

### 27) `exec container process '/bin/sh': Exec format error` (or another binary than `bin/sh`)

This can happen when running a container from an image built for a different architecture than the one you are running on.

For example, if a remote repository only has, and thus sends you, a `linux/arm64` _OS/ARCH_ but you run on `linux/amd64` (as happened in https://github.com/openMF/community-app/issues/3323 due to https://github.com/timbru31/docker-ruby-node/issues/564).

### 28) `Error: failed to create sshClient: Connection to bastion host (ssh://user@host:22/run/user/.../podman/podman.sock) failed.: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain`

In some situations where the client is not on the same machine as the Podman daemon, the client key could use a cipher not supported by the host. This indicates an issue with one's SSH config. Until remedied, using Podman over SSH
with a pre-shared key will be impossible.

#### Symptom

The accepted ciphers per `/etc/crypto-policies/back-ends/openssh.config` do not include the one that was used to create the public/private key pair that was transferred over to the host for ssh authentication.

You can confirm this is the case by attempting to connect to the host via `podman-remote info` from the client and simultaneously on the host running `journalctl -f` and watching for the error `userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]`.

#### Solution

Create a new key using a supported algorithm, e.g. ecdsa:

`ssh-keygen -t ecdsa -f ~/.ssh/podman`

Then copy the new id over:

`ssh-copy-id -i ~/.ssh/podman.pub user@host`

And then re-add the connection (removing the old one if necessary):

`podman-remote system connection add myuser --identity ~/.ssh/podman ssh://user@host/run/user/1000/podman/podman.sock`

And now this should work:

`podman-remote info`

---
### 29) Rootless CNI networking fails in RHEL with Podman v2.2.1 to v3.0.1

A failure is encountered when trying to use networking on a rootless
container in Podman v2.2.1 through v3.0.1 on RHEL.  This error does not
occur on other Linux distributions.

#### Symptom

A rootless container is created using a CNI network, but the `podman run` command
returns an error that an image must be built.

#### Solution

In order to use a CNI network in a rootless container on RHEL,
an Infra container image for CNI-in-slirp4netns must be created.  The
instructions for building the Infra container image can be found for
v2.2.1 [here](https://github.com/containers/podman/tree/v2.2.1-rhel/contrib/rootless-cni-infra),
and for v3.0.1 [here](https://github.com/containers/podman/tree/v3.0.1-rhel/contrib/rootless-cni-infra).

### 30) Container-related firewall rules are lost after reloading firewalld
The container network can't be reached after `firewall-cmd --reload` or `systemctl restart firewalld`.  Running `podman network reload` will fix it, but it has to be done manually.

#### Symptom
The firewall rules created by podman are lost when the firewall is reloaded.

#### Solution
[@ranjithrajaram](https://github.com/containers/podman/issues/5431#issuecomment-847758377) has created a systemd hook to fix this issue.

1) For "firewall-cmd --reload", create a systemd unit file with the following
```
[Unit]
Description=firewalld reload hook - run a hook script on firewalld reload
Wants=dbus.service
After=dbus.service

[Service]
Type=simple
ExecStart=/bin/bash -c '/bin/busctl monitor --system --match "interface=org.fedoraproject.FirewallD1,member=Reloaded" --match "interface=org.fedoraproject.FirewallD1,member=PropertiesChanged" | while read -r line ; do podman network reload --all ; done'

[Install]
WantedBy=default.target
```
2) For "systemctl restart firewalld", create a systemd unit file with the following
```
[Unit]
Description=podman network reload
Wants=firewalld.service
After=firewalld.service
PartOf=firewalld.service

[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/bin/podman network reload --all

[Install]
WantedBy=default.target
```
However, if you use `busctl monitor`, you can't get machine-readable output on RHEL 8,
since it doesn't have `busctl -j`, as mentioned by [@yrro](https://github.com/containers/podman/issues/5431#issuecomment-896943018).

For RHEL 8, you can use the following unit with a one-liner bash script.
```
[Unit]
Description=Redo podman NAT rules after firewalld starts or reloads
Wants=dbus.service
After=dbus.service
Requires=firewalld.service

[Service]
Type=simple
ExecStart=/bin/bash -c "dbus-monitor --profile --system 'type=signal,sender=org.freedesktop.DBus,path=/org/freedesktop/DBus,interface=org.freedesktop.DBus,member=NameAcquired,arg0=org.fedoraproject.FirewallD1' 'type=signal,path=/org/fedoraproject/FirewallD1,interface=org.fedoraproject.FirewallD1,member=Reloaded' | sed -u '/^#/d' | while read -r type timestamp serial sender destination path interface member _junk; do if [[ $type = '#'* ]]; then continue; elif [[ $interface = org.freedesktop.DBus && $member = NameAcquired ]]; then echo 'firewalld started'; podman network reload --all; elif [[ $interface = org.fedoraproject.FirewallD1 && $member = Reloaded ]]; then echo 'firewalld reloaded'; podman network reload --all; fi; done"
Restart=always

[Install]
WantedBy=default.target
```
`busctl monitor` is almost usable on RHEL 8, except that it always outputs two bogus events when it starts up,
one of which is (in its only machine-readable format) indistinguishable from the `NameOwnerChanged` that you get when firewalld starts up.
This means you would get an extra `podman network reload --all` when this unit starts.

Apart from this, you can use the following systemd service with the Python 3 code below.

```
[Unit]
Description=Redo podman NAT rules after firewalld starts or reloads
Wants=dbus.service
Requires=firewalld.service
After=dbus.service

[Service]
Type=simple
ExecStart=/usr/bin/python /path/to/python/code/podman-redo-nat.py
Restart=always

[Install]
WantedBy=default.target
```
The code reloads the podman network twice when you use `systemctl restart firewalld`.
```
import subprocess
import sys

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def reload_podman_network():
    # Reload the firewall rules for all podman networks.
    try:
        subprocess.run(["podman", "network", "reload", "--all"], timeout=90)
        sys.stdout.write("podman network reload done\n")
        sys.stdout.flush()
    except subprocess.TimeoutExpired as t:
        sys.stderr.write(f"Podman reload failed due to timeout {t}\n")
    except subprocess.CalledProcessError as e:
        sys.stderr.write(f"Podman reload failed due to {e}\n")
    except Exception as e:
        sys.stderr.write(f"Podman reload failed with an unhandled exception {e}\n")

def signal_handler(*args, **kwargs):
    # "Reloaded" fires on firewall-cmd --reload; "NameOwnerChanged" fires
    # when firewalld (re)starts and acquires its bus name.
    if kwargs.get('member') in ("Reloaded", "NameOwnerChanged"):
        reload_podman_network()

def signal_listener():
    try:
        DBusGMainLoop(set_as_default=True)  # Define the loop.
        loop = GLib.MainLoop()
        system_bus = dbus.SystemBus()
        # Listens to systemctl restart firewalld; even with this filter added, the podman network will be reloaded twice
        system_bus.add_signal_receiver(signal_handler, dbus_interface='org.freedesktop.DBus', arg0='org.fedoraproject.FirewallD1', member_keyword='member')
        # Listens to firewall-cmd --reload
        system_bus.add_signal_receiver(signal_handler, dbus_interface='org.fedoraproject.FirewallD1', signal_name='Reloaded', member_keyword='member')
        loop.run()
    except KeyboardInterrupt:
        loop.quit()
        sys.exit(0)
    except Exception as e:
        loop.quit()
        sys.stderr.write(f"Error occurred {e}\n")
        sys.exit(1)

if __name__ == "__main__":
    signal_listener()
```
### 31) Podman run fails with `ERRO[0000] XDG_RUNTIME_DIR directory "/run/user/0" is not owned by the current user` or `Error: error creating tmpdir: mkdir /run/user/1000: permission denied`.

A failure is encountered when performing `podman run` with a warning `XDG_RUNTIME_DIR is pointing to a path which is not writable. Most likely podman will fail.`

#### Symptom

A rootless container is being invoked with cgroup configuration as `cgroupv2` for a user with a missing or invalid **systemd session**.

Example cases
```bash
# su user1 -c 'podman images'
ERRO[0000] XDG_RUNTIME_DIR directory "/run/user/0" is not owned by the current user
```
```bash
# su - user1 -c 'podman images'
Error: error creating tmpdir: mkdir /run/user/1000: permission denied
```

#### Solution

Podman expects a valid login session for the `rootless+cgroupv2` use-case. Podman execution is expected to fail if the login session is not present. In most cases, podman will figure out a solution on its own, but if `XDG_RUNTIME_DIR` is pointing to a path that is not writable, execution will most likely fail. Typical scenarios of such cases are seen when users are trying to use Podman with `su - <user> -c '<podman-command>'`, or `sudo -l`, or with a badly configured systemd session.

Resolution steps

* Before invoking a Podman command, create a valid login session for your rootless user using `loginctl enable-linger <username>`
* If `loginctl` is unavailable, you can also try logging in via `ssh`, i.e. `ssh <username>@localhost`.
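
For example, for the user `user1` (illustrative):

```console
# loginctl enable-linger user1
# su - user1 -c 'podman images'
```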