
Commit add3a8f

feat(docker): add new Dockerfiles, HCL bake config, and ansible playbook

Add the docker-new pipeline foundation:

- base, core, and universe Dockerfiles with multi-stage builds
- docker-bake.hcl for buildx bake orchestration
- an ansible playbook with tag-based role selection
- an entrypoint with UID/GID remapping
- a CycloneDDS config

Images build locally with `docker buildx bake -f docker-new/docker-bake.hcl`. No existing files or workflows are affected.

Part of #7003.

Signed-off-by: Mete Fatih Cırıt <mfc@autoware.org>

1 parent 1e0efff commit add3a8f

9 files changed: 657 additions & 0 deletions

.hadolint.yaml

Lines changed: 2 additions & 0 deletions

```diff
@@ -3,3 +3,5 @@ ignored:
   - DL3013 # Pin versions in pip. Instead of `pip install <package>`, use `pip install <package>==<version>`
   - DL3015 # Avoid additional packages by specifying `--no-install-recommends`
   - DL3009 # Delete the apt-get lists after installing something
+  - DL3002 # Last USER should not be root (multi-stage builds need root in intermediate stages)
+  - DL3004 # Do not use sudo (our images use passwordless sudo by design)
```
Lines changed: 45 additions & 0 deletions

```yaml
- name: Autoware development environment
  hosts: localhost
  connection: local
  vars:
    rosdistro:
  pre_tasks:
    - name: Print configuration
      ansible.builtin.debug:
        msg:
          - rosdistro: "{{ rosdistro }}"
      tags: [always]

  roles:
    - role: autoware.dev_env.rmw_implementation
      tags: [base, core, universe, rmw]

    - role: autoware.dev_env.build_tools
      tags: [core, universe, ccache]

    - role: autoware.dev_env.dev_tools
      tags: [core, universe, dev_tools]

    - role: autoware.dev_env.ros2_dev_tools
      tags: [core, universe, ros2_dev_tools]

    - role: autoware.dev_env.acados
      tags: [universe, acados]

    - role: autoware.dev_env.geographiclib
      tags: [universe, geographiclib]

    - role: autoware.dev_env.qt5ct_setup
      tags: [universe, qt5ct_setup]

    - role: autoware.dev_env.cuda
      tags: [universe, nvidia, cuda]

    - role: autoware.dev_env.tensorrt
      tags: [universe, nvidia, tensorrt]

    - role: autoware.dev_env.spconv
      tags: [universe, nvidia, spconv]

    - role: autoware.dev_env.artifacts
      tags: [universe, artifacts]
```
docker-new/README.md

Lines changed: 127 additions & 0 deletions

````markdown
# Run Autoware in Docker

## Image Graph

```mermaid
graph TD
    base(["base"])
    base --> core-dependencies(["core-dependencies"])
    core-dependencies --> core-devel(["core-devel"])
    core-devel --> universe-dependencies(["universe-dependencies"])
    universe-dependencies --> universe-dependencies-cuda(["universe-dependencies-cuda"])
    universe-dependencies --> universe-devel(["universe-devel"])
    universe-dependencies-cuda --> universe-devel-cuda(["universe-devel-cuda"])
    base --> core(["core"])
    core-devel -- " COPY /opt/autoware " --> core
    core --> universe-runtime-dependencies(["universe-runtime-dependencies"])
    universe-runtime-dependencies --> universe(["universe"])
    universe-runtime-dependencies --> universe-cuda(["universe-cuda"])
    universe-devel -- " COPY /opt/autoware " --> universe
    universe-devel-cuda -- " COPY /opt/autoware " --> universe-cuda
    classDef base fill: #e8e8e8, color: #333
    classDef devel fill: #bbdefb, color: #333
    classDef runtime fill: #c8e6c9, color: #333
    classDef cuda fill: #e1bee7, color: #333
    class base base
    class core-dependencies,core-devel,universe-dependencies,universe-devel devel
    class core,universe-runtime-dependencies,universe runtime
    class universe-dependencies-cuda,universe-devel-cuda,universe-cuda cuda
```

## Images

| Image                           | Description                                                                           | Use case                                                   |
| ------------------------------- | ------------------------------------------------------------------------------------- | ---------------------------------------------------------- |
| `base`                          | ROS base, sudo, pipx, ansible, RMW, user `aw`                                          | Foundation for all other images                            |
| `core-dependencies`             | Build deps + compiled core packages (except autoware_core and autoware_rviz_plugins)   | CI for autoware_core                                       |
| `core-devel`                    | Adds the autoware_core build on top of core-dependencies                               | Development and CI for packages depending on autoware_core |
| `core`                          | Runtime-only: rosdep exec deps + compiled core from core-devel                         | Lightweight core runtime                                   |
| `universe-dependencies`         | Ansible universe roles + rosdep build deps for all of Autoware                         | CI for autoware_universe                                   |
| `universe-dependencies-cuda`    | Adds CUDA, TensorRT, and spconv dev libs                                               | CI for CUDA-dependent packages                             |
| `universe-devel`                | Builds all universe sources (no CUDA)                                                  | Development without GPU                                    |
| `universe-devel-cuda`           | Builds all universe sources with CUDA                                                  | Development with GPU                                       |
| `universe-runtime-dependencies` | Runtime ansible roles + rosdep exec deps                                               | Foundation for final runtime images                        |
| `universe`                      | Runtime image with compiled Autoware (no CUDA)                                         | Deployment without GPU                                     |
| `universe-cuda`                 | Runtime image with compiled Autoware + CUDA runtime libs                               | Deployment with GPU                                        |

## Build locally

Run these commands from the repository root. Targets beyond `base` require the source repositories under `src/`:

```bash
# Clone source repositories (needed for core and universe targets)
vcs import src < repositories/autoware.repos

# Build all default targets (universe + universe-cuda)
docker buildx bake -f docker-new/docker-bake.hcl

# Build a specific target (dependencies are resolved automatically)
docker buildx bake -f docker-new/docker-bake.hcl base
docker buildx bake -f docker-new/docker-bake.hcl core-devel
docker buildx bake -f docker-new/docker-bake.hcl universe
docker buildx bake -f docker-new/docker-bake.hcl universe-cuda

# Build for humble
ROS_DISTRO=humble docker buildx bake -f docker-new/docker-bake.hcl base
```

## Usage

```bash
xhost +local:docker

docker run --rm -it \
  --net host \
  --privileged \
  --gpus all \
  -e DISPLAY=$DISPLAY \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e HOST_UID=$(id -u) \
  -e HOST_GID=$(id -g) \
  -e QT_X11_NO_MITSHM=1 \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  -v $HOME/autoware_map:/home/aw/autoware_map \
  -v $HOME/autoware_data:/home/aw/autoware_data \
  -v $HOME/autoware:/home/aw/autoware \
  -w /home/aw/autoware \
  --runtime=nvidia \
  autoware:universe-cuda-jazzy \
  bash -c "source /opt/autoware/setup.bash && exec bash"
```

| Flag                                | Why                                                                                                      |
| ----------------------------------- | -------------------------------------------------------------------------------------------------------- |
| `--rm`                              | Remove the container on exit to avoid accumulating stopped containers                                     |
| `-it`                               | Interactive terminal (stdin + TTY)                                                                        |
| `--net host`                        | Share the host network stack so ROS 2 nodes can discover each other                                       |
| `--privileged`                      | Access to host devices (sensors, CAN bus, etc.)                                                           |
| `--gpus all`                        | Expose all GPUs to the container                                                                          |
| `-e DISPLAY`                        | Forward the X11 display for GUI applications (rviz2, rqt)                                                 |
| `-e NVIDIA_DRIVER_CAPABILITIES=all` | Enable all NVIDIA driver features (compute, graphics, video)                                              |
| `-e NVIDIA_VISIBLE_DEVICES=all`     | Make all GPUs visible inside the container                                                                |
| `-e HOST_UID/HOST_GID`              | The entrypoint remaps the `aw` user to match the host UID/GID, avoiding permission issues on mounted volumes |
| `-e QT_X11_NO_MITSHM`               | Disable MIT-SHM for Qt apps (shared memory doesn't work across the container boundary)                    |
| `-v /tmp/.X11-unix`                 | Mount the X11 socket for GUI forwarding                                                                   |
| `-v autoware_map`                   | Mount map data from the host                                                                              |
| `-v autoware_data`                  | Mount perception model data from the host                                                                 |
| `-v autoware`                       | Mount source code for development                                                                         |
| `-w /home/aw/autoware`              | Set the working directory to the mounted source                                                           |
| `--runtime=nvidia`                  | Use the NVIDIA container runtime for GPU support                                                          |

Or run without volume mounting:

```bash
docker run --rm -it \
  --net host \
  autoware:core-jazzy
```

The default CycloneDDS config uses the `lo` interface (localhost only). To override it, mount your own config:

```bash
docker run --rm -it \
  --net host \
  -v /path/to/your/cyclonedds.xml:/home/aw/cyclonedds.xml \
  autoware:universe-cuda-jazzy
```
````
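The shipped `docker-new/files/cyclonedds.xml` is not included in this chunk of the diff. As a hedged illustration of the "`lo` interface only" default described above, a CycloneDDS config restricted to loopback typically looks like the following (element names follow the upstream Cyclone DDS configuration schema; treat the exact contents as an assumption, not the file from this commit):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <General>
      <Interfaces>
        <!-- Bind DDS traffic to loopback only: discovery stays on this host -->
        <NetworkInterface name="lo"/>
      </Interfaces>
    </General>
  </Domain>
</CycloneDDS>
```

Mounting a file like this over `/home/aw/cyclonedds.xml` (as shown in the README above) is how you would open communication to other interfaces.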

docker-new/base.Dockerfile

Lines changed: 63 additions & 0 deletions

```dockerfile
# check=skip=InvalidDefaultArgInFrom
ARG ROS_DISTRO

FROM ros:${ROS_DISTRO}-ros-base AS base
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

ARG ROS_DISTRO
ARG USERNAME=aw

RUN rm -f /etc/apt/apt.conf.d/docker-clean && \
    echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache && \
    echo 'APT::Install-Recommends "false";' > /etc/apt/apt.conf.d/99-no-recommends && \
    echo 'APT::Install-Suggests "false";' >> /etc/apt/apt.conf.d/99-no-recommends
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends \
    sudo \
    pipx \
    bash-completion \
    gosu

# Remove default ubuntu user (present since 24.04, occupies UID 1000)
RUN userdel -r ubuntu 2>/dev/null || true && \
    useradd -m -s /bin/bash -U ${USERNAME} && \
    echo "${USERNAME} ALL=(ALL) NOPASSWD:ALL" >/etc/sudoers.d/90-user-nopasswd && \
    chmod 0440 /etc/sudoers.d/90-user-nopasswd && \
    sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /home/${USERNAME}/.bashrc

USER ${USERNAME}
WORKDIR /home/${USERNAME}

# Make pipx shims visible during build steps and at runtime
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"

ENV ANSIBLE_COLLECTIONS_PATH="/home/${USERNAME}/.ansible/collections"

# hadolint ignore=DL3003
RUN --mount=type=bind,source=ansible-galaxy-requirements.yaml,target=/tmp/ansible/ansible-galaxy-requirements.yaml \
    --mount=type=bind,source=ansible,target=/tmp/ansible/ansible \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    --mount=type=cache,target=/home/aw/.cache/pipx,uid=1000,gid=1000 \
    pipx install --include-deps "ansible==10.*" && \
    cd /tmp/ansible && \
    ansible-galaxy collection install -f -r ansible-galaxy-requirements.yaml && \
    ansible-playbook autoware.dev_env.autoware_requirements \
    --tags rmw \
    -e rosdistro=${ROS_DISTRO} && \
    pipx uninstall ansible

COPY docker-new/files/cyclonedds.xml /home/${USERNAME}/cyclonedds.xml
ENV CYCLONEDDS_URI=file:///home/${USERNAME}/cyclonedds.xml

# Entrypoint runs as root so it can adjust UID/GID, then drops to user
USER root
COPY --chmod=755 docker-new/docker-entrypoint.sh /docker-entrypoint.sh

ENV ROS_DISTRO=${ROS_DISTRO}
ENV USERNAME=${USERNAME}

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/bin/bash"]
```
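The `docker-new/docker-entrypoint.sh` script itself does not appear in this chunk of the diff. Based on the `HOST_UID`/`HOST_GID` behavior described in the README and the `gosu` package installed above, a minimal sketch of such an entrypoint might look like the following. Everything here is a hypothetical reconstruction, not the committed script; the snippet writes the sketch to a temp file and syntax-checks it so it can be validated without root:

```shell
# Write a hypothetical entrypoint sketch to a temp file and syntax-check it.
# The names (aw, HOST_UID, HOST_GID, gosu) mirror the README; the logic is an assumption.
cat > /tmp/docker-entrypoint.sh <<'EOF'
#!/bin/bash
set -e
USERNAME="${USERNAME:-aw}"

if [ "$(id -u)" = "0" ]; then
    # Remap the in-container user to the host UID/GID so files created
    # on mounted volumes end up owned by the host user.
    if [ -n "${HOST_UID}" ] && [ "${HOST_UID}" != "$(id -u "${USERNAME}")" ]; then
        usermod -u "${HOST_UID}" "${USERNAME}"
    fi
    if [ -n "${HOST_GID}" ] && [ "${HOST_GID}" != "$(id -g "${USERNAME}")" ]; then
        groupmod -o -g "${HOST_GID}" "${USERNAME}"
    fi
    # Drop privileges and run the requested command as the user.
    exec gosu "${USERNAME}" "$@"
fi

exec "$@"
EOF
chmod +x /tmp/docker-entrypoint.sh
bash -n /tmp/docker-entrypoint.sh && echo "entrypoint sketch parses"
```

The root-then-drop pattern matches the `USER root` + `ENTRYPOINT` lines in the Dockerfile above: only root can call `usermod`/`groupmod`, and `gosu` then execs the command as the remapped user without leaving a privileged parent process.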

docker-new/core.Dockerfile

Lines changed: 102 additions & 0 deletions

```dockerfile
# syntax=docker/dockerfile:1
# check=skip=InvalidDefaultArgInFrom
ARG BASE_IMAGE

FROM ${BASE_IMAGE} AS core-dependencies
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

USER ${USERNAME}
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    --mount=type=cache,target=/home/aw/.cache/pip,uid=1000,gid=1000 \
    --mount=type=cache,target=/home/aw/.cache/pipx,uid=1000,gid=1000 \
    pipx install --include-deps "ansible==10.*" && \
    ansible-playbook autoware.dev_env.autoware_requirements \
    --tags core \
    --skip-tags base \
    -e "rosdistro=${ROS_DISTRO}" && \
    pipx uninstall ansible
USER root

ENV CC="/usr/lib/ccache/gcc"
ENV CXX="/usr/lib/ccache/g++"
ENV CCACHE_DIR="/home/aw/.ccache"

COPY --parents --chown=${USERNAME}:${USERNAME} src/core/**/package.xml /tmp/autoware/
RUN rm -rf /tmp/autoware/src/core/autoware_core /tmp/autoware/src/core/autoware_rviz_plugins

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    apt-get update && \
    . "/opt/ros/${ROS_DISTRO}/setup.sh" && \
    rosdep install -y --from-paths /tmp/autoware/src/core \
    --ignore-src \
    --rosdistro "${ROS_DISTRO}" \
    --dependency-types=build \
    --dependency-types=build_export \
    --dependency-types=buildtool \
    --dependency-types=buildtool_export \
    --dependency-types=test

RUN --mount=type=bind,source=src/core,target=/tmp/autoware/src/core,rw \
    --mount=type=cache,target=/home/aw/.ccache,uid=1000,gid=1000 \
    rm -rf /tmp/autoware/src/core/autoware_core \
    /tmp/autoware/src/core/autoware_rviz_plugins && \
    . "/opt/ros/${ROS_DISTRO}/setup.sh" && \
    colcon build \
    --base-paths /tmp/autoware/src/core \
    --install-base /opt/autoware \
    --cmake-args -DCMAKE_BUILD_TYPE=Release && \
    rm -rf build log

FROM core-dependencies AS core-devel

COPY --parents --chown=${USERNAME}:${USERNAME} \
    src/core/autoware_core/**/package.xml \
    src/core/autoware_rviz_plugins/**/package.xml \
    /tmp/autoware/

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    apt-get update && \
    . "/opt/ros/${ROS_DISTRO}/setup.sh" && \
    . /opt/autoware/setup.sh && \
    rosdep install -y --from-paths /tmp/autoware/src/core \
    --ignore-src \
    --rosdistro "${ROS_DISTRO}" \
    --dependency-types=build \
    --dependency-types=build_export \
    --dependency-types=buildtool \
    --dependency-types=buildtool_export \
    --dependency-types=test

RUN --mount=type=bind,source=src/core/autoware_core,target=/tmp/autoware/src/core/autoware_core \
    --mount=type=bind,source=src/core/autoware_rviz_plugins,target=/tmp/autoware/src/core/autoware_rviz_plugins \
    --mount=type=cache,target=/home/aw/.ccache,uid=1000,gid=1000 \
    . "/opt/ros/${ROS_DISTRO}/setup.sh" && \
    . /opt/autoware/setup.sh && \
    colcon build \
    --base-paths /tmp/autoware/src/core/autoware_core \
    /tmp/autoware/src/core/autoware_rviz_plugins \
    --install-base /opt/autoware \
    --cmake-args -DCMAKE_BUILD_TYPE=Release && \
    rm -rf build log

FROM ${BASE_IMAGE} AS core
ENV AUTOWARE_RUNTIME=1

COPY --from=core-devel /opt/autoware /opt/autoware
RUN find /opt/autoware -name '*.so' -exec strip --strip-unneeded {} +

COPY --parents src/core/**/package.xml /tmp/

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    apt-get update && \
    . "/opt/ros/${ROS_DISTRO}/setup.sh" && \
    sudo apt-get install -y "ros-${ROS_DISTRO}-topic-tools" && \
    rosdep install -y --from-paths /tmp/src/core \
    --dependency-types=exec \
    --ignore-src \
    --rosdistro "${ROS_DISTRO}" && \
    rm -rf /tmp/src
```
