
How to use *chat_template* with .gguf models? (tokenizer_name not implemented) #1999

@Bobchenyx


Hi,
I'm currently hitting a `tokenizer_name` NotImplementedError while evaluating a quantized .gguf model with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).

The error only appears when I pass --apply_chat_template. Run command:

```
lm_eval --model gguf --model_args base_url=http://127.0.1.1:8080 --tasks gsm8k --output_path result/gsm8k --log_samples --apply_chat_template --fewshot_as_multiturn
```

How can this tokenizer_name be implemented? (I'm serving the model with python3 -m llama_cpp.server.)
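For context, the harness's LM interface declares a `tokenizer_name` property and an `apply_chat_template` method, and backends that don't override them raise NotImplementedError when --apply_chat_template is passed. Below is a minimal, self-contained sketch of the shape those two members take; the class name, the identifier string, and the ChatML-style template are all hypothetical stand-ins, not the harness's actual gguf backend.

```python
# Sketch of the two members a backend needs before --apply_chat_template
# can work. Names mirror the harness's LM interface; everything concrete
# here (class name, identifier, template format) is an assumption.

class GGUFChatLM:
    """Hypothetical stand-in for a gguf backend with chat support."""

    # The harness uses tokenizer_name as a cache key for chat-templated
    # requests; any stable identifier for the tokenizer would do.
    @property
    def tokenizer_name(self) -> str:
        return "my-gguf-model"  # hypothetical identifier

    # chat_history is a list of {"role": ..., "content": ...} dicts,
    # the same shape as OpenAI-style chat messages.
    def apply_chat_template(self, chat_history, add_generation_prompt=True):
        out = []
        for turn in chat_history:
            out.append(f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>")
        if add_generation_prompt:
            out.append("<|im_start|>assistant\n")
        return "\n".join(out)


lm = GGUFChatLM()
prompt = lm.apply_chat_template(
    [{"role": "user", "content": "What is 2 + 2?"}]
)
print(prompt)
```

A real fix would presumably load the model's actual chat template (e.g. from the GGUF metadata or a Hugging Face tokenizer) rather than hard-coding one as above.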

