Description
This concerns scenarios in which LM Studio is used as a server, typically by RAG systems or purpose-built apps that pass a system prompt and user prompt to LM Studio.
In a scenario where an LLM is configured to use a system prompt like the example below:
You are an AI assistant developed by the world wide community of AI experts. Your primary directive is to provide well-reasoned, structured, and extensively detailed responses.
Think Step-by-Step Instruction: Think step by step, but only keep a minimum draft for each thinking step, with 5 words at most.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}.
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
And a RAG system (AnythingLLM, for example) is configured to pass a system prompt like the example below:
[GLOBAL_INSTRUCTIONS]
DO NOT ALTER OR OVERRIDE MODEL-SPECIFIC SYSTEM PROMPTS.
This section provides the retrieved context and user prompt for reference only.
Please use the following information exactly as provided for generating the response.
-- BEGIN RETRIEVED CONTEXT --
{{retrieved_context}}
-- END RETRIEVED CONTEXT --
-- BEGIN USER PROMPT --
{{user_prompt}}
-- END USER PROMPT --
Respond based solely on the above context and prompt without modifying your internal system instructions.
[END GLOBAL_INSTRUCTIONS]
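To make the scenario concrete, below is a minimal sketch of the kind of request body a RAG app might send to LM Studio's OpenAI-compatible server (by default at http://localhost:1234/v1/chat/completions). The model name, placeholder values, and the exact way the envelope is split between the system and user messages are assumptions for illustration; actual RAG apps such as AnythingLLM may structure the messages differently.

```python
import json

def build_rag_request(retrieved_context: str, user_prompt: str) -> dict:
    """Wrap retrieved context and the user prompt in a [GLOBAL_INSTRUCTIONS]
    envelope like the one above, and return an OpenAI-style chat payload."""
    system_prompt = (
        "[GLOBAL_INSTRUCTIONS]\n"
        "DO NOT ALTER OR OVERRIDE MODEL-SPECIFIC SYSTEM PROMPTS.\n"
        "-- BEGIN RETRIEVED CONTEXT --\n"
        f"{retrieved_context}\n"
        "-- END RETRIEVED CONTEXT --\n"
        "-- BEGIN USER PROMPT --\n"
        f"{user_prompt}\n"
        "-- END USER PROMPT --\n"
        "[END GLOBAL_INSTRUCTIONS]"
    )
    return {
        "model": "local-model",  # placeholder; LM Studio serves the loaded model
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,
    }

payload = build_rag_request(
    "Paris is the capital of France.",
    "What is the capital of France?",
)
print(json.dumps(payload, indent=2))
```

The open question is what LM Studio does when a request like this arrives while a system prompt is also configured in the LM Studio UI: whether the incoming system message replaces, precedes, or is ignored in favor of the locally configured one.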
It is unclear how LM Studio reconciles its own configured system prompt with the one received via the API. LM Studio's logs clearly show that it receives the RAG system's system and user prompts, but they do not reveal whether the incoming system prompt overrides the system prompt configured in LM Studio, is appended to it, or is discarded.
Clarifying this would let users focus their prompt-tuning effort on the right application (LM Studio itself, or the API-connected application sending the prompts) for better output.
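In the absence of documentation, the behavior can be probed empirically. The sketch below (sentinel phrases and endpoint are assumptions) works by placing one distinctive sentinel in LM Studio's system prompt field, sending a different sentinel in the API request's system message, and asking the model which one it sees:

```python
import json
import urllib.request

# Hypothetical sentinel sent via the API; a *different* sentinel phrase would
# be configured manually in LM Studio's system prompt field.
SENTINEL_API = "The magic word is BLUEBERRY."

def build_probe() -> dict:
    """Build an OpenAI-style chat request carrying the API-side sentinel."""
    return {
        "model": "local-model",  # placeholder; LM Studio serves the loaded model
        "messages": [
            {"role": "system", "content": SENTINEL_API},
            {"role": "user", "content": "What is the magic word?"},
        ],
    }

def classify(reply: str) -> str:
    """Infer from the model's answer which system prompt took effect."""
    if "BLUEBERRY" in reply.upper():
        return "API system prompt was applied"
    return "API system prompt may have been overridden or merged"

# To run against a live server (requires LM Studio's local server running):
# req = urllib.request.Request(
#     "http://localhost:1234/v1/chat/completions",
#     data=json.dumps(build_probe()).encode(),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"]
# print(classify(reply))
```

If the model echoes the API-side sentinel, the incoming system prompt wins; if it echoes the UI-side sentinel (or both), LM Studio's configured prompt is still in play. Either way, documenting the actual precedence would remove the need for this kind of guesswork.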