This repository was archived by the owner on Aug 30, 2024. It is now read-only.

Commit abcc0f4

Update README.md (#83)
1 parent 12a17ee commit abcc0f4

File tree

1 file changed: +1 −1 lines changed


README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -41,7 +41,7 @@ streamer = TextStreamer(tokenizer)
 model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
 outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
 ```
->**Note**: For llama2/ mistral/ neural_chat/ codellama/ magicoder models, we can only support the local path to model for now.
+>**Note**: For llama2/ mistral/ neural_chat/ codellama/ magicoder/ chatglmv1/v2/ baichuan models, we can only support the local path to model for now.
 GGUF format HF model
 ```python
 from transformers import AutoTokenizer, TextStreamer
````
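The snippet being patched above passes a `TextStreamer` to `model.generate` so tokens print as they are produced rather than after generation finishes. Running the real snippet requires downloading a model, but the streaming pattern itself can be sketched self-contained: `transformers`' streamer interface exposes `put()` and `end()`, and the stand-ins below (`SimpleStreamer`, `fake_generate`) are hypothetical illustrations, not the library's implementation.

```python
class SimpleStreamer:
    """Minimal stand-in for transformers' TextStreamer interface (put/end)."""

    def __init__(self):
        self.chunks = []

    def put(self, text):
        # The real TextStreamer receives token ids and decodes them;
        # this sketch takes decoded text directly.
        self.chunks.append(text)
        print(text, end="", flush=True)

    def end(self):
        # Called once generation is finished.
        print()


def fake_generate(streamer, max_new_tokens):
    # Hypothetical stand-in for model.generate: emits pieces one at a
    # time, pushing each to the streamer as soon as it is "generated".
    for piece in ["streamed ", "pieces ", "appear ", "incrementally."][:max_new_tokens]:
        streamer.put(piece)
    streamer.end()
    return "".join(streamer.chunks)


s = SimpleStreamer()
out = fake_generate(s, max_new_tokens=3)
```

In the real API the same shape applies: `generate(inputs, streamer=streamer, ...)` calls the streamer as tokens arrive, so the console output and the returned `outputs` cover the same text.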
