SELinux behaves completely differently when I podman run vs. bootc switch my container image #439

Open
@stefwalter

Description

SELinux behaves completely differently when I run my bootable container under podman run versus deploying it (for example via bootc switch).

As far as I can tell, SELinux treats a container as a single security context, whereas on a host it enforces fine-grained per-file labeling. This means that files created by Containerfile commands such as WORKDIR or RUN end up with unexpected labels once the image is deployed.
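
A quick way to see the difference is to compare file labels in the two environments (the labels below are illustrative; the exact types depend on the policy shipped in the image):

# Inside an application container, everything shares one broad context:
podman run --rm quay.io/centos-bootc/centos-bootc:stream9 ls -Z /usr/bin/passwd
# e.g. system_u:object_r:container_file_t:s0 ...

# On the deployed bootc host, the same file carries a fine-grained type:
ls -Z /usr/bin/passwd
# e.g. system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd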

The following Containerfile doesn't work (as an example):

FROM quay.io/centos-bootc/centos-bootc:stream9

# Substitute YOUR public key below. The holder of the matching private key will have root access.
# podman build --build-arg="SSHPUBKEY=$(cat $HOME/.ssh/id_rsa.pub)" ...
ARG SSHPUBKEY
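# Keys are kept under /usr (rather than /etc) so they ship as image state; under bootc, /etc is machine-local and may diverge on the host.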
RUN mkdir /usr/etc-system && \
    echo 'AuthorizedKeysFile /usr/etc-system/%u.keys' >> /etc/ssh/sshd_config.d/30-auth-system.conf && \
    echo $SSHPUBKEY > /usr/etc-system/root.keys && chmod 0600 /usr/etc-system/root.keys

WORKDIR /locallm/models
RUN curl -LO https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_S.gguf

WORKDIR /locallm
RUN dnf -y install pip gcc-c++ python3-devel cmake
COPY requirements.txt /locallm/requirements.txt
RUN pip install --upgrade pip
RUN CMAKE_ARGS="-DLLAMA_NATIVE=off" FORCE_CMAKE=1 pip install --no-cache-dir --upgrade -r /locallm/requirements.txt 'llama-cpp-python[server]'

COPY run.sh run.sh
COPY run.service /etc/systemd/system/
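# Hand-create the enablement symlink, i.e. what systemctl enable run.service would create.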
RUN ln -s /etc/systemd/system/run.service /etc/systemd/system/multi-user.target.wants/run.service

# The following steps should be done in the bootc image.
CMD [ "/sbin/init" ]
STOPSIGNAL SIGRTMIN+3
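# Restore the file capabilities on the shadow-utils binaries (e.g. newuidmap), which are lost when the layer is unpacked during the build.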
RUN rpm --setcaps shadow-utils
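
For reference, this is roughly how the image is exercised both ways (the image tag here is just a placeholder):

podman build --build-arg="SSHPUBKEY=$(cat $HOME/.ssh/id_rsa.pub)" -t quay.io/example/locallm .

# As an application container:
podman run --rm -it quay.io/example/locallm

# As a bootable host image, from a running bootc system:
bootc switch quay.io/example/locallm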

With the following run.service file:

[Unit]
Description=Run LLama

[Service]
ExecStart=/locallm/run.sh /locallm/models/llama-2-7b-chat.Q5_K_S.gguf

[Install]
WantedBy=multi-user.target

And the following run.sh file:

#!/bin/bash

MODEL_PATH=${MODEL_PATH:-$1}

if [ -n "${CONFIG_PATH}" ]; then
    python -m llama_cpp.server --config_file "${CONFIG_PATH}"
    exit 0
fi

if [ -n "${MODEL_PATH}" ]; then
    python -m llama_cpp.server --model "${MODEL_PATH}" --host "${HOST:=0.0.0.0}" --port "${PORT:=8001}" --n_gpu_layers "${GPU_LAYERS:=0}" --clip_model_path "${CLIP_MODEL_PATH:=None}" --chat_format "${CHAT_FORMAT:=llama-2}"
    exit 0
fi

echo "Please set either a CONFIG_PATH or a MODEL_PATH"
exit 1
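
For reference, the script is meant to be invoked either with a positional model path (as run.service does) or via environment variables (the config path here is a placeholder):

# Positional model path, as in run.service:
./run.sh /locallm/models/llama-2-7b-chat.Q5_K_S.gguf

# Or via environment variables:
MODEL_PATH=/locallm/models/llama-2-7b-chat.Q5_K_S.gguf PORT=8001 ./run.sh
CONFIG_PATH=/locallm/config.json ./run.sh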

This works when run with podman as an application container, but fails when deployed as a bootable container, because the two environments apply completely different SELinux models.
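
One way to confirm the SELinux angle on the deployed host is to look for AVC denials (assuming the usual audit tooling is present):

ausearch -m AVC -ts recent
# or, without auditd:
journalctl -b | grep -i avc
# and to relabel files that were created with the wrong context:
restorecon -Rv /locallm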
