Issues: huggingface/candle
- [QUESTION] Protocol of adding a new model (Stella_en_<*>_v5 family) implementation with Candle (#2525, opened Oct 1, 2024 by AnubhabB)
- CUDA_ERROR_UNSUPPORTED_PTX_VERSION when loading is_u32_f32 (#2498, opened Sep 24, 2024 by super-fun-surf)
- [Tracking] FLUX T5 XXL model produces NaN when on CUDA and using F16 (#2480, opened Sep 16, 2024 by EricLBuehler)
- Integrate new speech-to-text model Fish Speech 1.4 (#2472, opened Sep 12, 2024 by jorgeantonio21)
- Example quantized with custom GGUF model error: cannot find llama.attention.head_count in metadata (#2450, opened Aug 27, 2024 by evgenyigumnov)
- Error limit reached. 100 errors detected in the compilation of "src/unary.cu" (#2446, opened Aug 24, 2024 by an1217)
- python sentence transformer all-MiniLM-L6-v2 is almost 2x faster than candle (#2418, opened Aug 15, 2024 by AbhishekBose)