Describe the bug
I installed the runner on an external volume. When I tried to start it with launchctl (underlying command: `/Volumes/WD/github-runner-00.studio.internal/runsvc.sh`), I got an "operation not permitted" error.
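For reference, this is the sequence I use to start the service, assuming the stock `svc.sh` wrapper that ships with the runner; the volume and runner directory are specific to my setup:

```sh
cd /Volumes/WD/github-runner-00.studio.internal
./svc.sh install   # writes a LaunchAgent plist that invokes runsvc.sh
./svc.sh start     # bootstraps the agent via launchctl
./svc.sh status    # the agent is listed, but runsvc.sh died with
                   # "operation not permitted" before the runner came up
```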
#1958 suggested giving Full Disk Access to `/bin/bash` and `/bin/sh`. In my case that wasn't enough: I also had to give Full Disk Access to the node binary, e.g. `/Volumes/WD/github-runner-00.studio.internal/externals/node20/bin/node`, to make it work. The same problem appears whenever a build step touches an external volume, for example when the Go build cache lives on one.
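A quick way to check whether a given node binary already has the grant, assuming `~/Library/Safari` is TCC-protected on this macOS version (reading it fails with `EPERM` for binaries without Full Disk Access):

```sh
/Volumes/WD/github-runner-00.studio.internal/externals/node20/bin/node \
  -e 'require("fs").readdirSync(process.env.HOME + "/Library/Safari")' \
  && echo "node has Full Disk Access" \
  || echo "node lacks Full Disk Access"
```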
It works great until the auto update kicks in. Each runner update ships a new node binary, e.g. `/Volumes/WD/github-runner-00.studio.internal/externals.2.322.0/node20/bin/node`, and the runner won't start until I give the new binary Full Disk Access.
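A stopgap I'm considering is re-registering each runner with the `--disableupdate` flag that `config.sh` accepts, so the `externals` path stays stable under the existing TCC grant; the URL and tokens below are placeholders, and this only trades the problem for manual updates:

```sh
./config.sh remove --token <REMOVAL_TOKEN>
./config.sh --url https://github.com/<org>/<repo> \
            --token <REGISTRATION_TOKEN> \
            --disableupdate   # pin the runner version; update manually instead
```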
I'd really rather not deal with this for 10 runners after every update. I run the Mac headless and don't always have access to the GUI.
To Reproduce
Steps to reproduce the behavior:
- Attach an external hard disk to the Mac
- Provision a runner on it per the GitHub documentation
- Start the runner via `runsvc.sh`; you should see the permission error (see the check after this list)
- Give the node binary that ships with the runner Full Disk Access in System Preferences->Security & Privacy->Full Disk Access
- Start the runner again; it should work this time
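To confirm the failure without the GUI, these standard macOS commands should surface it (the `actions.runner` label prefix is what `svc.sh` uses for its LaunchAgent):

```sh
launchctl list | grep actions.runner   # non-zero last-exit status for the agent
log show --last 10m \
  --predicate 'eventMessage CONTAINS[c] "operation not permitted"'
```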
I don't know how to trigger a runner software update on demand, but the problem it causes should be easy to confirm if the behavior above is reproducible.
Expected behavior
After a runner has been provisioned on an external volume and granted Full Disk Access, it should continue to work after an auto update.
Runner Version and Platform
Version of your runner?
2.322.0
OS of the machine running the runner?
macOS Sonoma 14.6.1
What's not working?
Please include error messages and screenshots.
Job Log Output
If applicable, include the relevant part of the job / step log output here. All sensitive information should already be masked out, but please double-check before pasting here.
Runner and Worker's Diagnostic Logs
If applicable, add relevant diagnostic log information. Logs are located in the runner's _diag
folder. The runner logs are prefixed with Runner_
and the worker logs are prefixed with Worker_
. Each job run correlates to a worker log. All sensitive information should already be masked out, but please double-check before pasting here.
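For anyone reproducing this, a quick way to pull the relevant lines from the newest runner log, if the runner got far enough to write one (directory path is from my setup):

```sh
cd /Volumes/WD/github-runner-00.studio.internal/_diag
ls -t Runner_*.log | head -n 1 | xargs grep -i "not permitted"
```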