Could you explain exactly how per-token quantization is performed on o_proj and down_proj?
https://github.com/AniZpZ/AutoSmoothQuant/blob/main/autosmoothquant/layers/nn/linear.py#L310
int8_weight, weight_scale = quantize_per_tensor_absmax(module.weight)
if act_quant == "per-token":
    alpha = weight_scale
When act_quant is "per-token", alpha is still set to the weight_scale returned by quantize_per_tensor_absmax, which is a bit confusing to me.
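For context, here is a minimal sketch of how per-token activation quantization commonly coexists with a per-tensor weight scale: the weight gets a single static scale (which would explain weight_scale coming from quantize_per_tensor_absmax), while each token's activation row gets its own scale at runtime, and both factors are multiplied back in during dequantization. This is not the AutoSmoothQuant code; quantize_per_token_absmax and the shapes below are illustrative assumptions.

```python
import torch

def quantize_per_tensor_absmax(w):
    # One scale for the whole tensor: absmax mapped to the int8 limit.
    scale = w.abs().max() / 127.0
    q = (w / scale).round().clamp(-128, 127).to(torch.int8)
    return q, scale

def quantize_per_token_absmax(x):
    # One scale per token (per row), shape [num_tokens, 1], computed at runtime.
    scale = x.abs().amax(dim=-1, keepdim=True) / 127.0
    q = (x / scale).round().clamp(-128, 127).to(torch.int8)
    return q, scale

# Toy shapes standing in for an o_proj / down_proj input and weight.
x = torch.randn(4, 16)    # activations: [num_tokens, in_features]
w = torch.randn(32, 16)   # weight: [out_features, in_features]

q_w, w_scale = quantize_per_tensor_absmax(w)   # scalar weight scale (static)
q_x, x_scale = quantize_per_token_absmax(x)    # per-token scales (dynamic)

# Simulate the int8 GEMM in float; a real kernel accumulates in int32.
acc = q_x.float() @ q_w.float().t()            # [num_tokens, out_features]

# Dequantize: each output row is rescaled by its own token scale times the
# single weight scale. Only the weight scale is known when the module is
# built, which may be why alpha holds weight_scale in the per-token branch
# while the per-token factor is applied dynamically.
y = acc * x_scale * w_scale
```

If that matches the intended design for o_proj and down_proj, it would clear things up, but please correct me if the actual dequantization works differently.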