Description
We noticed an issue when using container actions that are using the musl libc implementation.
In v0.8.0 an initContainer is used to copy some files (e.g. `tail`) from the `actions-runner:latest` container into an `emptyDir` volume.
This `emptyDir` volume gets mounted on the main container in your step pod.
```yaml
initContainers:
  - command:
      - bash
      - -c
      - sudo cp $(which sh) /mnt/externals/sh && sudo cp $(which tail) /mnt/externals/tail &&
        sudo cp $(which env) /mnt/externals/env && sudo chmod -R 777 /mnt/externals
    image: ghcr.io/actions/actions-runner:latest
```

When investigating what went wrong, we noticed the following error:
```console
❯ kubectl logs -n gha-runner-scale-sets --all-containers default-staging-kgk55-runner-lj8tf-step-167c4602
exec /__e/tail: no such file or directory
```
This occurs because the libc in our main container (`my-company/my-repo/tm-commitlint:2.0.0`) is musl, while the libc in the initContainer (which the `tail` command is copied from) is the GNU C library (glibc).
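In general, which libc a dynamically linked binary targets can be read off the dynamic loader path that `ldd` reports, without executing the binary. A minimal sketch (using `/bin/sh` as a stand-in target; any path can be substituted):

```shell
# Classify a binary's libc from its dynamic loader path, as reported by ldd.
# A loader like /lib/ld-musl-x86_64.so.1 indicates musl; /lib64/ld-linux-*.so.2
# indicates glibc. /bin/sh is only an example target here.
target=/bin/sh
loader=$(ldd "$target" 2>&1 | grep -o '/[^ ]*/ld-[^ )]*' | head -n 1)
case "$loader" in
  *ld-musl*)  libc_kind=musl ;;
  *ld-linux*) libc_kind=glibc ;;
  *)          libc_kind=unknown ;;
esac
echo "$target is linked against: $libc_kind"
```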
```console
# This is executed from within the step pod
/__e $ ldd /__e/tail 2>&1
	/lib64/ld-linux-x86-64.so.2 (0x7f27d561c000)
	libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f27d561c000)
Error relocating /__e/tail: __fdelt_chk: symbol not found
Error relocating /__e/tail: __memcpy_chk: symbol not found
Error relocating /__e/tail: __printf_chk: symbol not found
Error relocating /__e/tail: error: symbol not found
Error relocating /__e/tail: __fprintf_chk: symbol not found
```

As you can see, the `tail` that is present in the container (but not used by the container-hooks implementation) is based on musl:
```console
# This is executed from within the step pod
/__e $ which tail
/usr/bin/tail
/__e $ ldd $(which tail) 2>&1
	/lib/ld-musl-x86_64.so.1 (0x7f13dec06000)
	libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f13dec06000)
```

I think this can be a big issue, because it would mean that any container action based on the musl libc implementation is not supported.
Copying these binaries over will always result in this issue.
Perhaps a more elegant solution is to require the main container to provide a minimum set of binaries, for example `tail`. That way the `tail` binary native to the container can be used. The only small change people would have to make is to ensure these minimum binaries are included in their container image.
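Sketched as a shell check (hypothetical, not the hook's actual implementation), the proposed approach would let the hook verify the job container up front instead of copying foreign binaries into it:

```shell
# Hypothetical pre-flight check for the proposed "minimum binaries" contract:
# instead of copying glibc-linked tools into the job container, verify that the
# container already ships the required tools and fail early if any are missing.
required="sh tail env"
missing=""
for bin in $required; do
  command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
done
if [ -n "$missing" ]; then
  echo "job container is missing required binaries:$missing" >&2
  exit 1
fi
echo "all required binaries are present"
```

With a check like this, the container's own (musl-linked) `tail` would be used, avoiding the relocation errors shown above.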
The workflow that is used:
```yaml
name: Attempt test staging
on:
  workflow_dispatch:
jobs:
  staging-test2:
    name: Container action via staging runner
    runs-on: default-staging
    container: my-registry/tm-gha-base:v3 # just an ubuntu based container running as root
    steps:
      - name: Run tm-actions-commit-lint
        uses: MyCompnay/tm-actions-commit-lint@v1
```

This is the `action.yml` of the called tm-actions-commit-lint:
```yaml
---
name: Commit Lint Action
description: |
  Lints commit messages using commitlint.
  For push events, it lints the latest commit.
  For pull request events, it lints all commits from base to head.
  It is important to note that this action needs to be run in a job where your repository is checked out.
  The action requires your full git history to be available. You can do that by using the 'fetch-depth: 0' option.
    - name: Checkout repository
      uses: actions/[email protected]
      with:
        fetch-depth: 0
inputs:
  event_name:
    description: "GitHub event name"
    default: ${{ github.event_name }}
  base_sha:
    description: "Base SHA for pull request events"
    default: ${{ github.event.pull_request.base.sha }}
  head_sha:
    description: "Head SHA for pull request events"
    default: ${{ github.event.pull_request.head.sha }}
runs:
  using: docker
  entrypoint: /entrypoint.sh
  image: "docker://my-company/my-repo/tm-commitlint:2.0.0"
  args:
    - ${{ inputs.event_name }}
    - ${{ inputs.base_sha }}
    - ${{ inputs.head_sha }}
```