This repository was archived by the owner on Oct 25, 2024. It is now read-only.

How do I configure Neural Chat to work with chatglm3-6b? #1659

Open
Description

@ahlwjnj

I have successfully run ./chatglm3-6b with Intel Extension for Transformers on my laptop, but when I try to use Neural Chat to run the same model (./chatglm3-6b), it fails with:
"Process finished with exit code 137 (interrupted by signal 9: SIGKILL)"

The Neural Chat code that fails:
```python
from intel_extension_for_transformers.neural_chat import build_chatbot, PipelineConfig
from intel_extension_for_transformers.transformers import RtnConfig

# RTN 4-bit weight-only quantization for the chatbot pipeline.
config = PipelineConfig(
    model_name_or_path="./chatglm3-6b",
    optimization_config=RtnConfig(
        bits=4,
        compute_dtype="int8",
        weight_dtype="int4_fullrange",
    ),
)
chatbot = build_chatbot(config)
response = chatbot.predict(query="Hi")
```

CPU: Intel Core i7-13700H
Memory: 16 GB
OS: Ubuntu 22.04
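
Given 16 GB of RAM, I suspect exit code 137 (128 + signal 9) means the kernel OOM killer terminated the process while the full-precision weights were being materialized, before RTN quantization could shrink them. A back-of-the-envelope estimate (the parameter count is approximate):

```python
# Rough peak-memory estimate for loading ChatGLM3-6B weights in full
# precision before weight-only quantization is applied (approximate).
params = 6.2e9                              # ~6.2 billion parameters
print(f"fp16: {params * 2 / 1e9:.1f} GB")   # ~12.4 GB
print(f"fp32: {params * 4 / 1e9:.1f} GB")   # ~24.8 GB, well above 16 GB
```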

Q1: How should I configure Neural Chat so that it works with chatglm3-6b?

Q2: How do I expose a server API based on the Q4 model bin file (ne_chatglm2_q_nf4_bestla_cfp32_g32.bin)?
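
For context on Q2, this is the kind of request I would like the server to answer (a sketch only; the host, port, and /v1/chat/completions path are my assumptions modeled on OpenAI-style chat APIs, not taken from the NeuralChat docs):

```python
# Hypothetical client call; the endpoint URL and payload shape are
# assumptions (OpenAI-style chat API), not verified NeuralChat behavior.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed address
    json={
        "model": "chatglm3-6b",
        "messages": [{"role": "user", "content": "Hi"}],
    },
    timeout=60,
)
print(resp.json())
```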

Thanks a lot.
