Immich + NVIDIA GPU for Video Transcoding and ML #8193
-
Hey, this is great - thanks for making this guide! I happen to have an NVIDIA GPU (though I find using my Intel iGPU more efficient), so I'm going to test this and report back any issues. As for adding the option to have this configured during installation, I'm all for having it added as long as it is shown to work without too much fuss.
-
Awesome guide! I did all the steps you described and got to the point of actually running the ML, unfortunately …
-
NVIDIA GPU setup for Immich transcoding and ML
This is a quick-and-dirty guide to get your NVIDIA GPU working with the Immich LXC, which can be used for video transcoding and ML features like facial recognition.
Warning
I'm just a hobbyist, and not in any way a developer on Immich. Please make backups of your containers and follow this guide at your own risk.
Prerequisites

- An NVIDIA GPU with the NVIDIA driver installed and working on the Proxmox host (verify with `nvidia-smi`).

NVIDIA GPU LXC passthrough
Important
If you have existing data in your Immich LXC, back up your container!
In `/etc/pve/lxc/<CTID>.conf`, add these lines:

Reboot the container using `pct reboot <CTID>`.
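The config lines themselves didn't survive this copy. A typical NVIDIA passthrough stanza for `/etc/pve/lxc/<CTID>.conf` looks roughly like the following sketch; the cgroup device major numbers (`195`, `509` here) vary per system, so check `ls -l /dev/nvidia*` on the host before copying:

```
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```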
Locate the NVIDIA Linux driver that corresponds with the NVIDIA driver version on the host (as seen in `nvidia-smi`; for me, it was 550.163.01). Copy the URL of the `.run` file for the corresponding driver.

In the LXC, enter these commands:
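The commands didn't survive this copy. A sketch of what they likely look like, using the driver version mentioned above (the download URL follows NVIDIA's usual pattern, and `--no-kernel-modules` skips building the kernel module, which an LXC gets from the host — both are assumptions to verify against the original guide):

```shell
# Download the .run installer that matches the host driver version
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/550.163.01/NVIDIA-Linux-x86_64-550.163.01.run
chmod +x NVIDIA-Linux-x86_64-550.163.01.run
# Install only the userspace libraries; the kernel module comes from the host
./NVIDIA-Linux-x86_64-550.163.01.run --no-kernel-modules
```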
Reboot the container. After rebooting, you should now be able to run `nvidia-smi` from within the container:

Transcoding configuration
After following the previous steps for GPU passthrough, go to the Immich webapp and navigate to Administration > Video Transcoding Settings. Set "Acceleration API" to NVENC and save the settings.

CUDA configuration for ML features
Note
For these steps I will be using CUDA 12.4. For your installation, please refer to your `nvidia-smi` output to determine which version of CUDA and related packages to install.

In the LXC terminal, open the Immich ML logs:
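The command isn't preserved at this point in the copy, but the troubleshooting steps later in this guide reference it:

```shell
tail -f --lines 100 /var/log/immich/ml.log
```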
In the Immich webapp, upload a new image. You should start to see some logs in the LXC terminal. (I enabled OpenVINO by mistake when setting up the demo LXC, so your logs might be different.)
You might see some logs like this:
The key line is here:
We need the execution provider to be `CUDAExecutionProvider`, and to do that, we need `onnxruntime` to be able to detect our NVIDIA GPU.

Stop the Immich services:
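The exact command was lost here; given that the guide later starts the services with `systemctl start immich-ml immich-web`, the stop step is presumably:

```shell
systemctl stop immich-ml immich-web
```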
Activate the `ml-venv` uv virtual environment and use `uv pip list` to find the currently installed `onnx` and `cuda`/`cudnn` runtime DLLs:

This is my output:
If you don't see `onnxruntime-gpu` or any `cuda` packages, install the `onnxruntime-gpu` package with the `cuda` and `cudnn` extras (while the `ml-venv` virtualenv is still active):

You should now be able to see the installed `onnxruntime-gpu` package along with the relevant `cuda` and `cudnn` runtime DLLs:

Tip
Refer to the `onnxruntime` compatibility matrix for the compatible versions of `cuda` and `cudnn`.
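The install command referenced above ("the `onnxruntime-gpu` package with `cuda` and `cudnn` extras") wasn't preserved; with the `ml-venv` still active, it presumably looks like:

```shell
uv pip install "onnxruntime-gpu[cuda,cudnn]"
uv pip list | grep -Ei 'onnx|cuda|cudnn'   # verify the new packages appear
```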
Install the CUDA Toolkit for your expected CUDA version, as seen in `nvidia-smi`. For me, this was 12.4. These instructions can also be found on the NVIDIA website. Installing the keyring:

If the above doesn't work (`add-apt-repository not found`, `Depends: libtinfo5 but it is not installable`, etc.), try installing the libraries individually.

After installing, verify your CUDA Toolkit installation by updating the `PATH` and `LD_LIBRARY_PATH` variables and checking your output from `nvcc --version`. More post-installation instructions can be found here.

To update `PATH` and `LD_LIBRARY_PATH` to point to `cuda` and `cudnn` in the ML systemd service, add these lines to `/etc/systemd/system/immich-ml.service`, under the `[Service]` block:

Note
Your paths may look a little different depending on what CUDA version you're running. To figure out your `LD_LIBRARY_PATH` value, you can run
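The command itself was lost from this copy; something like the following (the `/opt/immich` search path is an assumption) will list candidate directories for `LD_LIBRARY_PATH`:

```shell
# List the directories containing the venv's cuDNN shared libraries
find /opt/immich -name 'libcudnn*.so*' 2>/dev/null | xargs -r -n1 dirname | sort -u
```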
Run `systemctl daemon-reload` and `systemctl start immich-ml immich-web`, or reboot the container. Open the logs again (`tail -f --lines 100 /var/log/immich/ml.log`) and upload an image again from the webapp. You should now see logs like this:

If you see `CUDAExecutionProvider` and no errors, congratulations! You just set up your NVIDIA GPU with Immich for ML.

To verify that the GPU is being used, you can run `nvidia-smi` on the host and check the active processes while uploading images and videos to Immich:

Troubleshooting
Here are a few quick checks and common fixes for ML/GPU setup issues.
No `CUDAExecutionProvider` in the execution provider list (`/var/log/immich/ml.log`)

Tip
Quick checks:
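The checklist itself didn't survive this copy; reasonable quick checks (the venv path below is an assumption) include:

```shell
# Is the GPU visible inside the container?
nvidia-smi
# Does onnxruntime report the CUDA provider?
/opt/immich/machine-learning/ml-venv/bin/python -c \
  "import onnxruntime; print(onnxruntime.get_available_providers())"
```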
If CUDA isn't listed, refer to the official `onnxruntime-gpu` documentation for required CUDA/cuDNN versions and setup:
https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#cuda-execution-provider
Failed to load library (`/var/log/immich/ml.log`)

Important
This usually indicates missing or incorrect library paths. Verify your environment variables in `/etc/systemd/system/immich-ml.service` under the `[Service]` section:

- `PATH` includes your CUDA bin directory (for example, `/usr/local/cuda-12.x/bin`).
- `LD_LIBRARY_PATH` includes the CUDA/cuDNN library directories from the ML venv.

After making changes, reload and restart the service:
If the problem persists, check recent service logs for detailed errors:
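The original command isn't preserved here; assuming the service logs to the systemd journal, something like this works:

```shell
journalctl -u immich-ml -n 100 --no-pager
```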
Specify the number of threads explicitly so the affinity is not set. (`/var/log/immich/ml.log`)
Check this: #8193 (reply in thread)

Run `nvidia-smi` on the host and ensure that the GPU is not currently being used by any other LXC.

No space left on device
Increase storage of the LXC by running this command on the PVE host:
For example, to increase LXC 100 storage by 8GB:
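The commands didn't survive this copy; the Proxmox CLI for this is `pct resize` (the `rootfs` volume name is the usual default, but check your container's config):

```shell
# General form
pct resize <CTID> rootfs +<size>G
# Example: grow LXC 100's root disk by 8 GB
pct resize 100 rootfs +8G
```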
Notes
It would be really neat to have Community Scripts ask for and configure the `onnxruntime-gpu` package with user-selected extras for GPU-supported ML features when initializing the LXC.