Converting docker images to apptainer

On infrastructures where Docker is not available, VIP converts Docker images to Apptainer using the following command:

apptainer build --fix-perms example-1.0.sif docker://docker.io/username/example:v1.0

Although the conversion is usually straightforward, we sometimes encounter difficulties. These errors can come from using GPUs, having a modified model, using very specific libraries, etc.
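A quick way to check that a converted image is usable is, for instance, to inspect it and run a trivial command in it (the image name below follows the example above):

# Show the metadata of the converted image
apptainer inspect example-1.0.sif
# Confirm that the image starts at all
apptainer exec example-1.0.sif cat /etc/os-release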


1. NVIDIA Driver and --nv Option

If your application runs PyTorch code that relies on the NVIDIA drivers for its GPU functions, you may encounter the following error when running the command apptainer run /home/user/example.sif:

Inference failed: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

This happens because, when switching to Apptainer, the NVIDIA drivers are not detected automatically.

If you encounter this error, first check that the NVIDIA driver can be found, as illustrated below:

Apptainer> nvidia-smi
Thu Jun 26 11:11:56 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.06             Driver Version: 535.183.06   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA L40S                    Off | 00000000:64:00.0 Off |                    0 |
| N/A   32C    P8              32W / 350W |      0MiB / 46068MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |

Then, you must explicitly pass the --nv flag to the container:

apptainer run --nv image.sif

Or in the Boutiques descriptor:

"container-image":{
          "type": "singularity",
          "image": "image_example:2.0",
          "container-opts": ["--nv"]
  },

You can find more information about the --nv flag here: https://apptainer.org/docs/user/1.0/gpu.html
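To quickly confirm that the GPU is visible from inside a converted image, a minimal check (assuming the image ships PyTorch; names are illustrative) could be:

# The host driver should now be visible inside the container
apptainer exec --nv example-1.0.sif nvidia-smi
# PyTorch should report that CUDA is available
apptainer exec --nv example-1.0.sif python3 -c 'import torch; print(torch.cuda.is_available())'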


2. Relative vs Absolute Paths in Boutiques or Code

In the command line described in the Boutiques descriptor, please use:

  • Absolute paths when referring to executable files that are included in the image
  • Relative paths for the inputs that are not included in the image, but mounted at runtime

Absolute paths for files included in the image:

For example, the following command line uses a relative path for the main.py file:

python3 scripts/main.py -i input -o OUTDIR

This can lead to the following error (that you will find in the application log):

python3: can't open file '/home/user/miccai/scripts/main.py': [Errno 2] No such file or directory

Please note that VIP overrides the WORKDIR at runtime. So even if you set a WORKDIR when building the image, you should not rely on it. That is why you need to specify the absolute path like this in the Boutiques descriptor command line:

python3 /app_challenge/scripts/main.py -i input -o OUTDIR

Inputs and Relative paths:

The inputs you pass to your application (data files, text files, CSV files, etc.) are not located in the container's home directory: they live on the VIP infrastructure and are mounted at runtime at a path that is not known in advance. For this reason, always refer to them with relative paths.
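As an illustration (the mount path below is hypothetical), VIP sets the working directory to the directory where the inputs are mounted, so a relative input name resolves correctly while the script is addressed with its absolute path inside the image:

# Working directory chosen by VIP at runtime (the actual path is not known in advance)
cd /vip/job/workdir
# Absolute path for the script baked into the image, relative path for the mounted input
python3 /app_challenge/scripts/main.py -i input -o OUTDIR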


3. Read-Only Filesystem in Apptainer

When using Apptainer, the container image is read-only by default.
This ensures reproducibility and prevents accidental modifications to system files, but it can lead to errors during the execution of your image.

Here are a few rules to follow when working on your application:

  1. Your application should not attempt to write to / or to system directories when executing on VIP

    • The root filesystem (/) and system directories like /usr, /bin, /lib, /etc are read-only.
    • You cannot install libraries or dependencies there during container execution.
    • Attempting to write there will produce errors like:
mkdir: cannot create directory 'OUTDIR': Read-only file system
  2. Use the /tmp folder for temporary files written by your application

    • The /tmp folder is the only one that is guaranteed to be writable.
    • It is only for temporary files that are created and used during execution.
  3. Do not use /tmp for installing dependencies when building your image

    • While /tmp is writable, it is meant only for use during execution, not for installing libraries or packages (see paragraph 4) when building the image.
    • Libraries should be installed at build time in other directories, inside the container definition (.def, see paragraph 5) or the Dockerfile.
  4. Advanced writable options (for VIP administrators), as sketched after this list

    • Writable overlay layers: --overlay <dir>
      Adds a writable layer on top of the read-only image. This is the solution used by VIP.
    • Writable tmpfs: --writable-tmpfs
      Provides an in-memory writable layer (non-persistent: changes are lost after execution).
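A minimal sketch of these two options (file and directory names are illustrative):

# Persistent writable overlay: changes are stored in the overlay directory
mkdir -p overlay_dir
apptainer run --overlay overlay_dir image.sif

# In-memory writable layer: changes are lost when the container exits
apptainer run --writable-tmpfs image.sif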

Summary table

Path / Type                        Writable?    Notes
/ (root filesystem)                No           Read-only by default; cannot install or modify system files
/usr, /bin, /lib, /etc             No           Same as /
/tmp                               Yes          Always writable; use for temporary files during execution
$HOME                              ⚠️ Depends   Usually writable unless disabled by admin
Other user-writable directories    ⚠️ Depends   Writable if there is an appropriate overlay

4. Libraries Installed in /tmp Become Inaccessible

During execution on VIP, we use bosh exec launch with a mount to /tmp:

boshopts+=("-v" "$tmpfolder:/tmp")

So when running bosh exec launch, if there is already content in /tmp inside the container image, it becomes inaccessible and is replaced by the content of the mounted directory.

Impact: The container may work locally, but not once launched with bosh exec.

For example, if you install a library or dependencies in /tmp, they won’t be accessible during execution on VIP. However, you can still use /tmp to work and create files while the execution is running.
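A hypothetical way to observe this shadowing locally (folder names are illustrative):

# Content baked into /tmp inside the image is visible without a bind mount...
apptainer exec image.sif ls /tmp
# ...but once a host folder is bound onto /tmp (as VIP does), only the host content is visible
mkdir -p /path/to/tmpfolder
apptainer exec -B /path/to/tmpfolder:/tmp image.sif ls /tmp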

Recommendation: Don't install dependencies in /tmp.

Ensure they are installed elsewhere in the image.


5. Conda Environment Not Activated Automatically

In Apptainer, the Conda environment is not auto-activated when the container starts (unlike Docker).

Workaround in Boutiques:

You will need to add the following lines to the ~/.bashrc via the Boutiques descriptor's command line.

For example, if your env name is venv_sct and conda is located in /root/sct/python/etc/profile.d/conda.sh, the lines should be:

export PATH=/root/sct/bin:$PATH
echo "source /root/sct/python/etc/profile.d/conda.sh && conda activate venv_sct" >> /root/.bashrc

and in the Boutiques descriptor:

"command-line": "export PATH=/root/sct/bin:$PATH && echo \"source /root/sct/python/etc/profile.d/conda.sh && conda activate venv_sct\" >> /root/.bashrc"

For the VIP administrator:

Workaround with Apptainer and a .def file:

Apptainer can use an example.def file at build time. This "example.def" plays the same role as a Dockerfile: commands such as pip install or dnf install can be run during the build.

To build the .sif from the .def file, use this command:

apptainer build my_app.sif my_app.def

In our case, we want to add the environment variables to the bashrc.

In the %environment section of example.def, we can add the two lines above so that the Conda environment is activated in the container.

# Minimal Apptainer Definition File Template
Bootstrap: docker
From: ubuntu:22.04

%labels
    Author Your Name
    Version 1.0
    Description "Minimal application container"

%environment
    export PATH="/opt/venv/bin:$PATH"
    export PYTHONPATH="/app:$PYTHONPATH"
    
    export PATH=/root/sct/bin:$PATH
    echo "source /root/sct/python/etc/profile.d/conda.sh && conda activate venv_sct" >> /root/.bashrc


%post
    # Install system dependencies
    apt-get update && apt-get install -y \
        python3 \
        python3-pip \
        python3-venv
    
    # Create virtual environment
    python3 -m venv /opt/venv
    
    # Install Python packages
    /opt/venv/bin/pip install numpy pandas

    # Create app directory
    mkdir /app

%runscript
    exec python3 /app/main.py "$@"

6. Missing nnU-Net Environment Variables

Some executions can be very slow because they do not use the GPU, without any clear error such as the one in paragraph 1. This can be caused by missing environment variables required by nnU-Net.

Set these variables before execution (you will first need to locate the corresponding folders in your installation):

export nnUNet_raw=/path/to/folder/nnunet_raw
export nnUNet_preprocessed=/path/to/folder/nnunet_preprocessed
export nnUNet_results=/path/to/folder/nnunet_results

You will need to add these lines to the command-line section of the Boutiques descriptor. For example, if the folders are located under /workspace:

"command-line": "export nnUNet_raw=/workspace/nnunet_raw && export nnUNet_preprocessed=/workspace/nnunet_preprocessed && export nnUNet_results=/workspace/nnunet_results ...",

These are critical for proper functioning of the model and GPU support.

Depending on the model used by the application, you may need to check its documentation to find the correct variables.
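As a hypothetical sanity check, something like the following can be appended after the exports in the command line, so that the variables appear in the application log:

# Print the nnU-Net variables so they show up in the log
env | grep nnUNet_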

7. Shell Implementations Across Distributions

Shell behavior can vary from one distribution to another, which can lead to subtle bugs, especially when relying on bash-specific features.

If you encounter an image that does not use bash, you may have to adapt the Boutiques descriptor so that the command lines are compatible with the shell available in the image.

Prefer POSIX-compatible syntax (i.e. #!/bin/sh) if maximum portability is required.

Examples:

  • Scripts using arrays or [[ ... ]] conditions will fail in dash.
  • Using source instead of . will fail in dash.

For example, in the source case you will encounter this:

source ~/.profile
=> source: not found.
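A minimal sketch of POSIX-compatible equivalents (the variable name is illustrative):

# bash-only constructs that fail in dash/sh:
source ~/.profile
[[ "$mode" == gpu* ]] && echo "GPU mode"

# POSIX-compatible equivalents:
. ~/.profile
case "$mode" in gpu*) echo "GPU mode" ;; esac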

Shell differences between distributions:

Shell   Default on                    Key characteristics
bash    Most Linux distros            Full-featured shell; supports arrays, process substitution, etc.
dash    Debian, Ubuntu (/bin/sh)      Very fast and lightweight; strictly POSIX-compliant; no arrays
sh      Minimal containers, Alpine    Often symlinked to dash or other minimal shells
bash    CentOS, RHEL                  Sometimes older versions (e.g., 3.2) lacking modern features
