Loading Tokenizer from GGUF model #2941
yvonneprice started this conversation in General
Replies: 2 comments 2 replies
Yeah, I was wondering the same thing; they also have the chat templates built in as well. I think quantized models aren't a priority for them, which is understandable.
Seems like a good and useful feature.
I'm writing a small proof-of-concept command-line chatbot that uses local LLMs for inference, as a way to learn more about Candle and Rust.
From the examples I have seen, it seems we need to provide the path to a separate tokenizer file. However, reading this thread in mistral.rs, it seems that this is not necessary: a GGUF file already contains the information needed to build a tokenizer. In fact, mistral.rs implemented exactly this: EricLBuehler/mistral.rs#345
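To illustrate why this is possible: the GGUF container stores a typed key/value metadata section in its header, and tokenizer data lives under keys like `tokenizer.ggml.model` and `tokenizer.ggml.tokens` per the GGUF spec. Below is a minimal, self-contained sketch (my own illustration, not Candle code) that builds a synthetic GGUF-style buffer with one string metadata entry and parses it back out, just to show the layout a tokenizer-from-GGUF loader would read:

```rust
use std::convert::TryInto;

// GGUF value-type id for strings, per the GGUF spec.
const GGUF_TYPE_STRING: u32 = 8;

// Append a GGUF string: u64 little-endian length, then raw UTF-8 bytes.
fn write_string(buf: &mut Vec<u8>, s: &str) {
    buf.extend_from_slice(&(s.len() as u64).to_le_bytes());
    buf.extend_from_slice(s.as_bytes());
}

// Build a tiny GGUF-style header with a single metadata entry.
// This is synthetic test data, not a real model file.
fn build_sample() -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(b"GGUF");             // magic
    buf.extend_from_slice(&3u32.to_le_bytes()); // format version
    buf.extend_from_slice(&0u64.to_le_bytes()); // tensor count
    buf.extend_from_slice(&1u64.to_le_bytes()); // metadata kv count
    write_string(&mut buf, "tokenizer.ggml.model");
    buf.extend_from_slice(&GGUF_TYPE_STRING.to_le_bytes());
    write_string(&mut buf, "llama");
    buf
}

fn read_u64(data: &[u8], pos: &mut usize) -> u64 {
    let v = u64::from_le_bytes(data[*pos..*pos + 8].try_into().unwrap());
    *pos += 8;
    v
}

fn read_string(data: &[u8], pos: &mut usize) -> String {
    let len = read_u64(data, pos) as usize;
    let s = String::from_utf8(data[*pos..*pos + len].to_vec()).unwrap();
    *pos += len;
    s
}

fn main() {
    let data = build_sample();
    assert_eq!(&data[0..4], b"GGUF");
    let mut pos = 4 + 4; // skip magic and version
    let _tensor_count = read_u64(&data, &mut pos);
    let kv_count = read_u64(&data, &mut pos);
    for _ in 0..kv_count {
        let key = read_string(&data, &mut pos);
        let ty = u32::from_le_bytes(data[pos..pos + 4].try_into().unwrap());
        pos += 4;
        assert_eq!(ty, GGUF_TYPE_STRING);
        let value = read_string(&data, &mut pos);
        println!("{key} = {value}");
    }
}
```

If I understand correctly, Candle's `candle_core::quantized::gguf_file` module already exposes this metadata map when reading a GGUF file, so a tokenizer-from-GGUF feature would presumably build on that rather than re-parsing the header as above.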
I haven't been able to find anything similar in Candle. Did I overlook it, or does Candle not have it? If Candle doesn't have it, do you have plans to add it? Would you accept a contribution? (I'm new to both Candle and Rust, but I'd like to give this a try.)
Thanks!