Description
Please confirm
- I have searched existing issues to check if an issue already exists for the bug I encountered.
Distribution
Ubuntu
Distribution version
24.04
Output of "snap list --all lxd core20 core22 core24 snapd"
`latest/edge`

Output of "lxc info" or system info if it fails

n/a

Issue description
Listing instance information that comes from the DB is quick and optimal. Listing just the instance names with `lxc list -f csv -c n` shows up like this in `lxc monitor --pretty`:
time="2025-10-08T15:28:14-04:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url=/1.0 username=sdeziel
time="2025-10-08T15:28:14-04:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url="/1.0/instances?filter=&recursion=1" username=sdeziel
Listing any column that requires state information not available in the DB is much less optimal. When asking for the instance's PID (`-c p`) or the instance's process count (`-c N`), the full state information is requested from every existing instance.
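The distinction above could be sketched as a mapping from requested columns to the recursion level the client needs. This is a hypothetical illustration only: the `neededRecursion` function and the exact column-to-state mapping are assumptions for clarity, not LXD's actual implementation.

```go
package main

import "fmt"

// neededRecursion is a hypothetical sketch: classify requested `lxc list`
// column letters by whether they need runtime state (recursion=2) or only
// DB-backed fields (recursion=1). The set of state columns below is an
// assumption for illustration, not taken from LXD's source.
func neededRecursion(columns string) int {
	stateColumns := map[rune]bool{
		'p': true, // PID
		'N': true, // number of processes
		'm': true, // memory usage
		'D': true, // disk usage
	}
	for _, c := range columns {
		if stateColumns[c] {
			return 2 // full state required
		}
	}
	return 1 // DB-backed fields only
}

func main() {
	fmt.Println(neededRecursion("n"))  // name only: DB is enough
	fmt.Println(neededRecursion("np")) // name + PID: needs state
}
```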
For example, with the following instances:
$ lxc list -f csv -c n
ansible
ansible01
autopkg
juju
noble-builder
pylxd
v1
wine-games
An `lxc list -f csv -c p` looks like this in `lxc monitor --pretty`:
time="2025-10-08T15:37:43-04:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url=/1.0 username=sdeziel
time="2025-10-08T15:37:43-04:00" level=debug msg="Handling API request" ip=@ method=GET protocol=unix url="/1.0/instances?filter=&recursion=2" username=sdeziel
time="2025-10-08T15:37:43-04:00" level=debug msg="Sending request to LXD" etag= method=GET url="https://custom.socket/1.0"
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=autopkg pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=ansible pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=ansible01 pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="Sending request to LXD" etag= method=GET url="https://custom.socket/1.0/state"
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=ansible01 pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=ansible pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=autopkg pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=v1 pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=juju pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=noble-builder pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=pylxd pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=v1 pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=noble-builder pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=juju pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=pylxd pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage started" driver=zfs instance=wine-games pool=default project=default
time="2025-10-08T15:37:43-04:00" level=debug msg="GetInstanceUsage finished" driver=zfs instance=wine-games pool=default project=default
Those GetInstanceUsage calls fetch disk usage information, which is pointless for the requested columns and potentially slow, especially on ceph and/or when there are many instances.
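The fix suggested by the log above could look like gating the storage-usage query on whether a disk-usage column was actually requested. This is a minimal sketch of that idea: `stateRequest`, `collectState`, and the call labels are illustrative names, not LXD's real API.

```go
package main

import "fmt"

// stateRequest is a hypothetical flag bag describing which expensive
// pieces of state the caller actually needs.
type stateRequest struct {
	wantDiskUsage bool // true only when a disk-usage column was asked for
}

// collectState sketches the suggested optimization: always gather the
// cheap per-instance information, but skip the potentially slow
// storage-usage query unless it was requested. Returns the list of
// calls that would be made, for illustration.
func collectState(req stateRequest, instances []string) []string {
	var calls []string
	for _, inst := range instances {
		// Cheap, kernel-backed information (PID, process count).
		calls = append(calls, "GetProcesses:"+inst)
		if req.wantDiskUsage {
			// Storage-backed and potentially slow, especially on ceph.
			calls = append(calls, "GetInstanceUsage:"+inst)
		}
	}
	return calls
}

func main() {
	instances := []string{"ansible", "juju"}
	fmt.Println(collectState(stateRequest{wantDiskUsage: false}, instances))
	fmt.Println(collectState(stateRequest{wantDiskUsage: true}, instances))
}
```

With this shape, `lxc list -f csv -c p` would trigger no GetInstanceUsage calls at all, while `lxc list -f csv -c D` would behave as today.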
Steps to reproduce
In a terminal, run:
lxc monitor --pretty | grep -E ' (instance|url)='
In another terminal, run one of:
lxc list -f csv -c n # name check, fast
lxc list -f csv -c p # PID check, slow
lxc list -f csv -c N # proc count, slow
Information to attach
- Any relevant kernel output (`dmesg`)
- Instance log (`lxc info NAME --show-log`)
- Instance configuration (`lxc config show NAME --expanded`)
- Main daemon log (at `/var/log/lxd/lxd.log` or `/var/snap/lxd/common/lxd/logs/lxd.log`)
- Output of the client with `--debug`
- Output of the daemon with `--debug` (or use `lxc monitor` while reproducing the issue)