
ContainerID.scope/cgroup-procs: no such file or directory #4620

Open
@xuegege5290

Description


failed to write 296291: open /sys/fs/cgroup/systemd/kubepods.slice/.../crio-66d6d4ac9851cfcab8400277ad96770ce52c1d75eeac29046753875056eacaed.scope/cgroup-procs: no such file or directory


Looking at the system logs, I found that when this issue occurs, systemd reports that the corresponding containerID.scope has "Succeeded", meaning the scope unit has exited, yet the container is still running. Under normal circumstances you would not see containerID.scope "Succeeded" while the container is alive; seeing it should imply that the container has exited. I have no idea what causes this.

Jan 09 10:09:16 ceashare23-node-3 systemd[1]: crio-0644511bc2734f3ff9f9532f6dcd9f1b92597f269168b137ad0404c3a3118.scope: Succeeded.
Jan 09 10:09:16 ceashare23-node-3 systemd[1]: crio-0644511bc2734f3ff9f9532f6dcd9f1b92597f269168b137ad0404c3a3118.scope: Consumed 4.458% CPU time.

The corresponding service no longer exists, but the container is still running.

[root@ceashare23-node-3 kubepods-burstable-podf921ac10_59ef_4825_ac61_248f1989c789.slice]# systemctl status crio-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope
Unit crio-0644511bc2734f5ff9f9532fdecddb9f1b92597fe269168b137ada040c3a3118.scope could not be found.

Under normal circumstances, it should look like this:

 [root@ceashare23-node-3 kubepods-burstable-podf921ac10_59ef_4825_ac61_248f1989c789.slice]# systemctl status crio-d203819875e216b8fa4dc7763cb8dbe4a2cfc8e0877db05798c4a337fd7c4e18.scope
Warning: The unit file, source configuration file or drop-ins of crio-d203819875e216b8fa4dc7763cb8dbe4a2cfc8e0877db05798c4a337fd7c4e18.scope changed on d>
● crio-d203819875e216b8fa4dc7763cb8dbe4a2cfc8e0877db05798c4a337fd7c4e18.scope - libcontainer container d203819875e216b8fa4dc7763cb8dbe4a2cfc8e0877db05798>
   Loaded: loaded (/run/systemd/transient/crio-d203819875e216b8fa4dc7763cb8dbe4a2cfc8e0877db05798c4a337fd7c4e18.scope; transient)
Transient: yes
  Drop-In: /run/systemd/transient/crio-d203819875e216b8fa4dc7763cb8dbe4a2cfc8e0877db05798c4a337fd7c4e18.scope.d
           └─50-DevicePolicy.conf, 50-DeviceAllow.conf, 50-MemoryLimit.conf, 50-CPUShares.conf, 50-CPUQuotaPeriodSec.conf, 50-CPUQuota.conf
   Active: active (running) since Fri 2025-02-07 10:44:12 CST; 46min ago
    Tasks: 1
   Memory: 92.0K (limit: 1.0G)
      CPU: 334ms
   CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc17f3be_9ce2_4f7b_93a2_b9b2bcf3cfe1.slice/crio-d203819875e216b8fa4dc7763cb8dbe>
           └─2275598 sleep 3600s
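
For context, here is a minimal sketch (not runc's actual code) of the kind of write that produces the error above under the cgroup v1 systemd hierarchy: moving a PID into the container's cgroup by writing it to cgroup.procs. The scope path below is hypothetical. Once systemd has cleaned up the transient scope (the "Succeeded" log lines above), the directory is gone and the write fails with "no such file or directory".

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical scope directory; on a real node this would be the
	// crio-<containerID>.scope directory under the pod's slice.
	scopeDir := "/sys/fs/cgroup/systemd/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-podEXAMPLE.slice/crio-EXAMPLE.scope"

	// Writing a PID to cgroup.procs moves that process into the cgroup.
	pid := os.Getpid()
	procsFile := filepath.Join(scopeDir, "cgroup.procs")
	err := os.WriteFile(procsFile, []byte(fmt.Sprintf("%d\n", pid)), 0o644)

	// If systemd has already removed the transient scope, the directory no
	// longer exists and the write fails with ENOENT, matching the
	// "failed to write <pid>: ... no such file or directory" error above.
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("failed to write %d: %v\n", pid, err)
		return
	}
	if err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("moved pid", pid, "into", scopeDir)
}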

Steps to reproduce the issue

It happens only sometimes, not consistently; I have not found a reliable way to reproduce it.

Describe the results you received and expected

The containerID.scope unit reports "Succeeded", meaning the scope service has exited, yet the container still exists. Under normal circumstances you would not see containerID.scope "Succeeded" while the container is running; seeing it should imply that the container has exited.
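
To spot this mismatch on an affected node, a rough diagnostic sketch like the following could be used (assumptions: CRI-O with the systemd cgroup driver on cgroup v1; the container ID is the example from the output above, and the glob over the kubepods slice layout is an assumption, not runc behaviour):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func main() {
	// Example container ID taken from the systemctl output above.
	containerID := "d203819875e216b8fa4dc7763cb8dbe4a2cfc8e0877db05798c4a337fd7c4e18"
	unit := "crio-" + containerID + ".scope"

	// Ask systemd whether the transient scope is still known. In the failure
	// described above this reports LoadState=not-found even though the
	// container process is still running.
	out, err := exec.Command("systemctl", "show", "-p", "LoadState", "-p", "ActiveState", unit).CombinedOutput()
	if err != nil {
		fmt.Println("systemctl show failed:", err)
	}
	fmt.Printf("%s:\n%s", unit, out)

	// Check whether the cgroup v1 systemd hierarchy still has a directory for
	// the scope (the glob over pod slices is an assumption about the layout).
	matches, _ := filepath.Glob("/sys/fs/cgroup/systemd/kubepods.slice/*/*/" + unit)
	if len(matches) == 0 {
		fmt.Println("no cgroup directory found for", unit)
	} else {
		fmt.Println("cgroup directory still present:", matches[0])
	}
}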

What version of runc are you using?

runc --version
runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13

Host OS information

No response

Host kernel information

Linux compute-node1 4.19.90-52.39 x86_64

[root@compute-node1 cgroup]# systemctl --version
systemd 243
