Description
Issue Checklist
- I understand that issues are for feedback and problem solving, not for complaining in the comment section, and will provide as much information as possible to help solve the problem.
- My issue is not listed in the FAQ.
- I've looked at pinned issues and searched existing Open Issues, Closed Issues, and Discussions; no similar issue or discussion was found.
- I've written a short, clear title so that developers can quickly get a rough idea of the issue when scanning the list, rather than something vague like "a suggestion" or "stuck".
- I've confirmed that I am using the latest version of Cherry Studio.
Platform
macOS
Version
v1.7.19
Bug Description
The "Raptor mini" GitHub Copilot model is leaking its internal reasoning process and raw tool execution syntax (XML tags) directly into the chat interface. Instead of a clean response, the end user sees the model's "thoughts" and the technical output of builtin_memory_search. This breaks the abstraction layer and clutters the UI with debugging information that should be handled in the background.
Steps To Reproduce
- Select the "Raptor mini | GitHub Copilot" model
- Provide a prompt that triggers a memory search or internal reasoning (e.g., asking about a specific term that might have been mentioned previously)
- Observe the output as the model processes the request
Expected Behavior
Internal reasoning (Chain-of-Thought) and tool execution XML should be handled in the background. The user should only see the final generated response. If tool status needs to be shown, it should be via a dedicated UI element (e.g., a loading spinner or a "Searching memory..." status indicator), not raw text and tags.
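As one possible mitigation on the client side, the leaked segments could be filtered out of the model output before rendering. A minimal sketch follows; the tag names (`think`, `builtin_memory_search`) are assumptions for illustration and would need to be confirmed against Raptor mini's actual output, and this is not Cherry Studio's real rendering code:

```typescript
// Hypothetical tag names assumed to wrap internal reasoning and tool calls.
const INTERNAL_TAGS = ["think", "builtin_memory_search"];

// Strip internal reasoning and tool-execution XML from raw model output
// so only the user-facing response is rendered in the chat UI.
function stripInternalSegments(raw: string): string {
  let cleaned = raw;
  for (const tag of INTERNAL_TAGS) {
    // Remove paired tags and everything between them (non-greedy match).
    const paired = new RegExp(`<${tag}[^>]*>[\\s\\S]*?</${tag}>`, "g");
    cleaned = cleaned.replace(paired, "");
    // Remove dangling open/close/self-closing tags left by a cut-off stream.
    const dangling = new RegExp(`</?${tag}[^>]*/?>`, "g");
    cleaned = cleaned.replace(dangling, "");
  }
  // Collapse excess blank lines left behind by removed segments.
  return cleaned.replace(/\n{3,}/g, "\n\n").trim();
}
```

With streamed responses, such a filter would need to buffer across chunk boundaries so a tag split mid-stream is still caught; the sketch above only handles a complete string.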
Relevant Log Output
Additional Context
No response
