Hi! Congratulations on this work.
I was wondering if you’ve experimented with quantizing the model to reduce GPU memory usage, particularly during inference. Have you run any tests or benchmarks in this direction?
Also, do you think quantization could be feasible and effective for this model without causing a significant performance drop?
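To make the question concrete, here is a minimal sketch of the kind of post-training quantization I have in mind (plain Python, symmetric per-tensor int8 on hypothetical weights), just to illustrate the roughly 4x memory saving over fp32 that motivates the question:

```python
# Sketch of symmetric int8 post-training quantization.
# Weights here are hypothetical example values, not from the actual model.

def quantize_int8(weights):
    """Map float weights to int8 values plus one per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.98, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each int8 weight occupies 1 byte vs 4 bytes in fp32: ~4x smaller.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(scale, 4), round(max_err, 4))
```

The open question is whether the quantization error this introduces stays negligible for this model's actual layers, or whether it would need something more careful (per-channel scales, calibration, etc.).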