
Adding Llama.cpp support with quantized models #16

Draft
ArthurCamara wants to merge 3 commits into castorini:main from ArthurCamara:llama_cpp

Conversation

@ArthurCamara

Adds support for Llama.cpp with quantized models.

8-bit model: https://huggingface.co/castorini/rank_vicuna_7b_v1_q8_0/
4-bit model: https://huggingface.co/castorini/rank_vicuna_7b_v1_q4_0/
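For reference, here is a minimal usage sketch (not the PR's own code) showing how one of these quantized checkpoints could be loaded and prompted through llama-cpp-python after downloading it locally. The local file name, the context-window setting, and the example prompt are assumptions for illustration; the actual integration in this PR may wire Llama.cpp in differently.

```python
# Minimal sketch (assumptions, not the PR's implementation): load the 8-bit
# quantized RankVicuna checkpoint with llama-cpp-python and run one
# reranking-style prompt. Download the model file from
# https://huggingface.co/castorini/rank_vicuna_7b_v1_q8_0/ first.
from llama_cpp import Llama

llm = Llama(
    model_path="rank_vicuna_7b_v1_q8_0.gguf",  # assumed local file name
    n_ctx=4096,        # room for a query plus several candidate passages
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
    verbose=False,
)

# Hypothetical listwise reranking prompt in the RankVicuna style.
prompt = (
    "I will provide you with 2 passages, each indicated by a numerical identifier.\n"
    "Rank the passages based on their relevance to the query: what is llama.cpp?\n"
    "[1] llama.cpp is a C/C++ inference engine for LLaMA-family models.\n"
    "[2] Vicuna is an open chat model fine-tuned from LLaMA.\n"
    "Answer with the ranked list only, e.g., [2] > [1].\n"
)

output = llm(prompt, max_tokens=32, temperature=0.0)
print(output["choices"][0]["text"].strip())  # e.g. "[1] > [2]"
```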

