What's preventing the use of GPU in docker containers? #854
Unanswered
AnkushMalaker asked this question in Q&A
Replies: 2 comments, 5 replies
That's awesome to hear! What OS are you running on? And do you see GPU utilization spike when the offline chat model responds? A few things that needed to be done were:
5 replies
Some information that might be useful: I used another program, fish, which supports using the GPU for training and inference in Docker. It seems to come down to:
I tried doing the same thing. However,
0 replies
I was able to give access to my GPU by modifying the docker-compose file and using `nvidia/cuda:12.1.1-runtime-ubuntu22.04` as the base image. `torch.cuda.is_available()` is also `True`.
What's preventing the LLM from utilizing the GPU, which would otherwise work with a local install following the offline + GPU instructions?
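For reference, granting a container GPU access via docker-compose typically looks like the sketch below. This is a minimal, hypothetical fragment based on the standard Compose GPU device-reservation syntax; the service name `server` is an assumption, and the image tag is the one mentioned above:

```yaml
# Hypothetical docker-compose.yml sketch; service name "server" is assumed.
services:
  server:
    image: nvidia/cuda:12.1.1-runtime-ubuntu22.04   # base image mentioned above
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1              # or "count: all" to expose every GPU
              capabilities: [gpu]
```

This requires the NVIDIA Container Toolkit on the host. Note that a `True` from `torch.cuda.is_available()` only confirms the device is visible; the application still has to be configured (or compiled, for llama.cpp-style backends) to offload work to it.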