Adding model for local inference #1362
elijahrenner asked this question in Q&A
Hi, I'm using MLXLLM for inference on Llama 3.1 in my Swift app. I want to use a higher-parameter model, but it isn't one of the built-in options. In my app I have a model configuration (one of the registry entries) that retrieves the model for me. How do I add more models?
Answered by awni on Apr 7, 2025
If the model type is supported you can make a new model configuration that points at the Hugging Face repo you want to use.
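A minimal sketch of what that could look like, assuming the `ModelConfiguration` and `LLMModelFactory` APIs from the MLXLLM / MLXLMCommon packages in mlx-swift-examples; the 70B repo id below is just an illustrative example of an mlx-community model:

```swift
import MLXLLM
import MLXLMCommon

// Hypothetical extra entry: any Hugging Face repo whose model_type is
// supported can be wrapped in a ModelConfiguration, just like the
// built-in registry entries.
extension ModelConfiguration {
    static let llama3_1_70B_4bit = ModelConfiguration(
        id: "mlx-community/Meta-Llama-3.1-70B-Instruct-4bit"
    )
}

// Loading works the same way as for the registry models: the factory
// downloads the weights and config on first use and caches them locally.
func loadLargerLlama() async throws -> ModelContainer {
    try await LLMModelFactory.shared.loadContainer(
        configuration: .llama3_1_70B_4bit
    )
}
```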
You can see a list of supported model types here. Those model types correspond to the `model_type` field in the `config.json` of the Hugging Face repo. See here for example.
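For instance, a Llama 3.1 repo's `config.json` contains something along these lines (an illustrative excerpt; the remaining fields vary by model):

```json
{
  "model_type": "llama",
  "hidden_size": 4096,
  "num_hidden_layers": 32
}
```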