Need Advice/Experiences: Using Local AI Models (on 8 GB GPU) #293
Gerkinfeltser asked this question in Q&A · Unanswered
Hey everyone,
I’m running local AI models, specifically the 7B variants of OpenHermes and Mistral, on an 8 GB GPU. Mistral has been somewhat more compliant, but honestly it’s been challenging: neither model adheres to the prompt patterns the way I expected, and I’m trying to figure out why.
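For context on why 7B models are tight on 8 GB, here is a rough back-of-envelope sketch (my own illustration, not from the original post) of the VRAM needed just for the weights at common quantization levels. Note it ignores the KV cache and activation memory, which also eat into the 8 GB budget:

```python
def approx_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GPU memory for model weights alone.

    Excludes KV cache, activations, and framework overhead,
    so real usage will be noticeably higher.
    """
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

# A 7B model at common precisions:
for bits in (16, 8, 4):
    print(f"7B @ {bits}-bit: ~{approx_weight_gb(7e9, bits):.1f} GB")
```

This is why fp16 (~14 GB) won't fit at all, 8-bit (~7 GB) is borderline, and 4-bit quants (~3.5 GB) are the usual choice on 8 GB cards — which may also explain some of the pattern-adherence issues, since heavier quantization degrades instruction following.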
I’m reaching out to see if anyone has insights or advice.
Appreciate any help you can offer!