Release v2.2.0 #220
Conversation
amakropoulos commented Aug 26, 2024 (edited)
- closes #212: Hot-swap LoRA with updated llama.cpp
- closes #171: The editor crashes when exiting playmode while it is creating the LLM service
- closes #206: [Regression] Custom Model Path not working anymore
…99 to avoid enabling editor, handle overwrite plugin
ltoniazzi commented:
Does this mean that the LoRA weights are fixed once the server starts? In that case hot-swapping is not really allowed, since one cannot change the adapter weights after the server has started. I think the current design in llama.cpp for hot-swapping is to: …
So, say, characters with different adapters share the same … Let me know if I am misunderstanding something!
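For context on what runtime rescaling looks like on the llama.cpp side, here is a minimal C# sketch. It assumes a llama.cpp server started with the adapters already preloaded (e.g. one `--lora` flag per adapter) and exposing the `/lora-adapters` endpoint; the adapter ids, scales, and server URL are illustrative placeholders, not values from this PR.

```csharp
// Sketch only: adapter ids, scales, and the server URL are assumptions, not values from this PR.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class LlamaCppLoraClient
{
    static readonly HttpClient http = new HttpClient();

    // Enable adapter 0 at full strength and disable adapter 1, without restarting the server.
    public static async Task SwapAsync(string serverUrl = "http://localhost:8080")
    {
        string body = "[{\"id\": 0, \"scale\": 1.0}, {\"id\": 1, \"scale\": 0.0}]";
        var content = new StringContent(body, Encoding.UTF8, "application/json");
        var response = await http.PostAsync(serverUrl + "/lora-adapters", content);
        response.EnsureSuccessStatusCode();
    }
}
```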
@ltoniazzi ah no, I just didn't phrase it properly. To use a specific LoRA for a character, one currently calls SetLoraWeight multiple times: 0 for every adapter except the character's LoRA, which is set to 1 (see the sketch below).
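A minimal sketch of that per-character pattern, assuming an LLMUnity `LLM` instance with a `SetLoraWeight(path, weight)` signature and illustrative adapter paths (both the signature and the paths are assumptions, not confirmed by this PR):

```csharp
// Sketch only: the SetLoraWeight(path, weight) signature and the adapter paths are assumptions.
using LLMUnity;

public static class CharacterLoraSwitcher
{
    // Adapters registered with the LLM before it started (placeholder paths).
    static readonly string[] adapterPaths = { "loras/alice.gguf", "loras/bob.gguf" };

    // Zero out every adapter, then apply the active character's adapter at full strength.
    public static void UseCharacter(LLM llm, string activeAdapterPath)
    {
        foreach (string path in adapterPaths)
        {
            llm.SetLoraWeight(path, path == activeAdapterPath ? 1f : 0f);
        }
    }
}
```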