💫 Pinned
- intel/ipex-llm: Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…
- kvcache-ai/ktransformers: A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations