cpuset cgroup directories accumulate without cleanup when SCHED_DEADLINE CallbackGroups restart #1265

@atsushi421

Description

Summary

When the configurator_node is running and a CallbackGroup configured with SCHED_DEADLINE + CPU affinity restarts repeatedly (e.g., the target application crashes and relaunches), /sys/fs/cgroup/cpuset/<n> directories grow without bound. They are only cleaned up in the destructor when the configurator node itself shuts down.

Details

  • set_affinity_by_cgroup() (thread_configurator_node.cpp:308) creates a new cpuset directory on every call, naming it from a monotonically increasing counter (cgroup_num_++). It never reuses or removes old directories.
  • When a CallbackGroup's thread_id changes (i.e., the target application restarted), callback_group_callback() (lines 462–474) re-applies the configuration via issue_syscalls() → set_affinity_by_cgroup(), creating a new directory while the old one remains.
  • After on_all_configured() (lines 552–566) runs, all applied flags are reset, so the next restart cycle goes through the same path again.
  • The destructor (lines 277–284) calls rmdir() for all created directories, but this runs only when the configurator node shuts down, not during runtime.
  • In long-running deployments where target applications may restart many times, this leads to an ever-growing number of stale cpuset directories under /sys/fs/cgroup/cpuset/.
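One possible direction, sketched below as a minimal standalone example rather than the project's actual code: track the directory created for each callback group and rmdir the stale one before creating its replacement when the configuration is re-applied after a thread_id change. The CgroupTracker class, the create_for() name, and the directory naming scheme are all hypothetical; a plain filesystem path stands in for /sys/fs/cgroup/cpuset so the bookkeeping can be shown without root privileges.

```cpp
#include <filesystem>
#include <map>
#include <string>

namespace fs = std::filesystem;

// Hypothetical sketch: remember which cpuset directory belongs to each
// callback group, so a restart replaces the old directory instead of
// leaking it until the configurator node's destructor runs.
class CgroupTracker {
public:
  explicit CgroupTracker(fs::path base) : base_(std::move(base)) {}

  // Would be called from set_affinity_by_cgroup(): if this callback group
  // already has a cpuset directory from a previous thread_id, remove it
  // before creating the new one.
  fs::path create_for(const std::string& group_id) {
    auto it = dirs_.find(group_id);
    if (it != dirs_.end()) {
      fs::remove(it->second);  // rmdir the now-stale directory
      dirs_.erase(it);
    }
    fs::path dir = base_ / (group_id + "_" + std::to_string(cgroup_num_++));
    fs::create_directories(dir);
    dirs_[group_id] = dir;
    return dir;
  }

  // Final cleanup on shutdown, mirroring what the destructor already does.
  ~CgroupTracker() {
    for (auto& [group, dir] : dirs_) fs::remove(dir);
  }

private:
  fs::path base_;
  int cgroup_num_ = 0;
  std::map<std::string, fs::path> dirs_;
};
```

With this bookkeeping, repeated restarts of the same callback group keep exactly one live directory per group instead of accumulating one per restart; the counter can keep incrementing, since uniqueness of the name no longer matters for cleanup.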
