
Health checks fail silently for services in transient store #28483

@libohad-dev


Issue Description

Running on Debian Forky with podman version 5.8.1

The symptoms are similar to #25034 — quadlet services configured with a health check remain in `starting` status indefinitely. Unlike that earlier report, the problem here occurs only for services configured to run in the transient store.

The following unit file reliably reproduces the behavior:

```ini
[Unit]
Description=Minimal healthcheck reproduction case for transient store

[Container]
Image=docker.io/library/alpine:latest
ContainerName=healthcheck-repro
Exec=sleep infinity
PodmanArgs=--transient-store
HealthCmd=true
HealthInterval=10s
HealthStartPeriod=60s

[Install]
WantedBy=default.target

[Service]
Restart=always
```

After the service starts, `journalctl --grep healthcheck` shows a flurry of timer firings within a few seconds, but `podman inspect` on the container shows no health-check log entries. Without the `PodmanArgs` line, the health check works reliably.
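For reference, the observations above come from commands along these lines (rootless user session assumed; the container name matches the unit file above):

```shell
# Watch health-check related journal entries for the user session
journalctl --user --grep healthcheck -f

# Query the container's health status and log
# (the log comes back empty in the failing case)
podman inspect --format '{{.State.Health.Status}}' healthcheck-repro
podman inspect --format '{{json .State.Health.Log}}' healthcheck-repro
```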

This is perhaps to be expected, since `PodmanArgs` is an opaque string that quadlet passes through without interpretation. For health checks to work reliably here, quadlet units would likely need to gain awareness of the transient store as an explicit configuration knob. What surprises me most is that the failure is silent: it never manifests as a service health failure. I'm frankly conflicted as to whether this ticket belongs more on systemd's side than here.

Steps to reproduce the issue

  1. Set up a quadlet service with `PodmanArgs=--transient-store` and a health check
  2. Start the service
  3. Observe that the service remains in `starting` status indefinitely
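Concretely, the steps can be run like this (rootless user units assumed; the unit file name is hypothetical):

```shell
# 1. Install the quadlet unit for the current user
mkdir -p ~/.config/containers/systemd
cp healthcheck-repro.container ~/.config/containers/systemd/

# 2. Regenerate units and start the service
systemctl --user daemon-reload
systemctl --user start healthcheck-repro.service

# 3. The health status never leaves "starting"
podman inspect --format '{{.State.Health.Status}}' healthcheck-repro
```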

Describe the results you received

The service is never marked as healthy.

Describe the results you expected

Either of the following:

  1. Support health checks for quadlet services running in the transient store
  2. Fail such services clearly, with a reasonable failure cause; ideally, also document this setup as explicitly unsupported

podman info output

host:
  arch: amd64
  buildahVersion: 1.43.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.13+ds1-2_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: unknown'
  cpuUtilization:
    idlePercent: 97.89
    systemPercent: 0.96
    userPercent: 1.15
  cpus: 20
  databaseBackend: sqlite
  distribution:
    codename: forky
    distribution: debian
    version: unknown
  eventLogger: journald
  freeLocks: 2036
  hostname: fenrir
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.19.10+deb14-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 38110179328
  memTotal: 67264385024
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    defaultNetwork: podman
    dns:
      package: aardvark-dns_1.16.0-2_amd64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.16.0
    package: netavark_1.16.1-3.1_amd64
    path: /usr/lib/podman/netavark
    version: netavark 1.16.1
  ociRuntime:
    name: runc
    package: runc_1.3.5+ds1-1_amd64
    path: /usr/bin/runc
    version: |-
      runc version 1.3.5+ds1
      commit: 1.3.5+ds1-1
      spec: 1.2.1
      go: go1.26.1
      libseccomp: 2.6.0
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20260120.386b5f5-1_amd64
    version: |
      pasta 0.0~git20260120.386b5f5-1
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.3.3-1_amd64
    version: |-
      slirp4netns version 1.3.3
      commit: 944fa94090e1fd1312232cbc0e6b43585553d824
      libslirp: 4.9.1
      SLIRP_CONFIG_VERSION_MAX: 6
      libseccomp: 2.6.0
  swapFree: 1023406080
  swapTotal: 1023406080
  uptime: 2h 21m 8.00s (Approximately 0.08 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/ohad/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/ohad/.local/share/containers/storage
  graphRootAllocated: 459522514944
  graphRootUsed: 365566750720
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 277
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/ohad/.local/share/containers/storage/volumes
version:
  APIVersion: 5.8.1
  BuildOrigin: Debian
  Built: 1774364999
  BuiltTime: Tue Mar 24 17:09:59 2026
  GitCommit: ""
  GoVersion: go1.26.1
  Os: linux
  OsArch: linux/amd64
  Version: 5.8.1

Podman in a container

No

Privileged Or Rootless

Rootless (per `podman info` above)

Upstream Latest Release

Yes

Additional environment details


Additional information


Metadata

Labels: kind/bug (Categorizes issue or PR as related to a bug.)