Summary
If you use community.general.lxc_container with an unprivileged user, all features work (create, config, ...) except starting a container.
A workaround is to set the DBUS_SESSION_BUS_ADDRESS environment variable correctly for the unprivileged user:
become_user: 'lxc_unprivileged_user'
become: true
community.general.lxc_container:
  name: container_name
  state: started
  container_config:
    - "lxc.net.0.type = veth"
    - "lxc.net.0.link = lxcbr0"
    - "lxc.net.0.flags = up"
    - "lxc.net.0.ipv4.address = '10.0.0.3'"
    - "lxc.net.0.ipv4.gateway = 'auto'"
environment:
  DBUS_SESSION_BUS_ADDRESS: 'unix:path=/run/user/10000/bus'
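For reference, the bus path above follows the systemd-logind convention of placing the per-user session bus at /run/user/&lt;uid&gt;/bus, so the value can be derived from the user's UID instead of being hard-coded (a sketch; the username and UID 10000 are taken from this report, substitute your own):

```shell
# Derive the session bus address for the unprivileged user from its UID.
# systemd-logind creates /run/user/<uid>/bus when the user has an active
# session (or lingering enabled via `loginctl enable-linger`).
uid=10000   # e.g. uid="$(id -u lxc_unprivileged_user)"
export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/${uid}/bus"
echo "$DBUS_SESSION_BUS_ADDRESS"
```

Note that the socket only exists while a user session is active, so lingering may need to be enabled for the unprivileged user.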
Issue Type
Bug Report
Component Name
community.general.lxc_container
Ansible Version
$ ansible --version
ansible [core 2.19.0b2]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.13.3 (main, Apr 10 2025, 21:38:51) [GCC 14.2.0] (/usr/bin/python3)
jinja version = 3.1.6
pyyaml version = 6.0.2 (with libyaml v0.2.5)
Community.general Version
$ ansible-galaxy collection list community.general
# /usr/lib/python3/dist-packages/ansible_collections
Collection Version
----------------- -------
community.general 10.6.0
Configuration
$ ansible-config dump --only-changed
CACHE_PLUGIN(/home/user/.ansible.cfg) = ansible.builtin.jsonfile
CACHE_PLUGIN_CONNECTION(/home/user/.ansible.cfg) = ~/.config/ansible/facts_cache
CACHE_PLUGIN_TIMEOUT(/home/user/.ansible.cfg) = 7200
CONFIG_FILE() = /home/user/.ansible.cfg
DEFAULT_GATHERING(/home/user/.ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/user/.ansible.cfg) = ['/home/user/.config/ansible/hosts']
INTERPRETER_PYTHON(/home/user/.ansible.cfg) = auto_silent
GALAXY_SERVERS:
OS / Environment
Debian 13 Trixie, arm64
Steps to Reproduce
- name: Config container
  become_user: 'lxc_unprivileged_user'
  become: true
  community.general.lxc_container:
    name: container_name
    state: started
    container_config:
      - "lxc.net.0.type = veth"
      - "lxc.net.0.link = lxcbr0"
      - "lxc.net.0.flags = up"
      - "lxc.net.0.ipv4.address = '10.0.0.3'"
      - "lxc.net.0.ipv4.gateway = 'auto'"
Expected Results
The container is configured and started with no errors.
Actual Results
TASK [Config container] ********************************************************
[ERROR]: Task failed: Module failed: The container [ container_name ] failed to start. Check to lxc is available and that the container is in a functional state.
Origin: /home/user/container_config:69:7
67 - 'lxc_image: {{ lxc_image }}'
68
69 - name: Config container
^ column 7
fatal: [machine]: FAILED! => {"changed": false, "error": "Failed to start container [ container_name ]", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "matrix", "state": "stopped"}, "msg": "The container [ container_name ] failed to start. Check to lxc is available and that the container is in a functional state.", "rc": 1}
PLAY RECAP *********************************************************************
machine : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The container does not start. With state=stopped, or if the container is started manually beforehand, the playbook applies the config without errors. With the default state, started, the playbook fails when booting the container.
Code of Conduct
- I agree to follow the Ansible Code of Conduct