Conversation
```go
if m["keywords"] == nil {
	return "keywords are required", nil
}
```
The keywords and content fields in the input are not used in the processor.
```go
if !ok {
	processor = textDetector[_defaultMode]
}
return &Tool{
```
I understand that this logic implies keywords and content are set during the initialization of the tool, which does not align with the expected behavior: keywords or content should be outputs generated by the LLM, not predefined at this stage.
So should the keywords check be done by the LLM? Actually, I want to add an intervention method as a tool.
I think the keywords can be set by the user, and the LLM calls the tool with content to do sensitive word detection.
If the keywords were given by the LLM, the LLM could do sensitive word detection itself without calling the tool.
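A minimal sketch of the design described above, assuming a hypothetical `SensitiveWordTool` type (the names and method shape here are illustrative, not the actual PR's API): keywords are fixed by the user at construction time, and the LLM supplies only the content to check.

```go
package main

import (
	"fmt"
	"strings"
)

// SensitiveWordTool holds user-provided keywords fixed at construction.
// (Hypothetical type; the real tool in the PR may differ.)
type SensitiveWordTool struct {
	keywords []string
}

// NewSensitiveWordTool is called by the user, not the LLM.
func NewSensitiveWordTool(keywords []string) *SensitiveWordTool {
	return &SensitiveWordTool{keywords: keywords}
}

// Run is what the LLM invokes; it receives only content, never keywords.
func (t *SensitiveWordTool) Run(content string) string {
	for _, kw := range t.keywords {
		if strings.Contains(content, kw) {
			return fmt.Sprintf("sensitive word detected: %s", kw)
		}
	}
	return "ok"
}

func main() {
	tool := NewSensitiveWordTool([]string{"leak"})
	fmt.Println(tool.Run("possible data leak found")) // sensitive word detected: leak
	fmt.Println(tool.Run("all clear"))                // ok
}
```

With this split, the LLM cannot trivially bypass detection, since it never sees the keyword list in the tool call.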
This sensitive word detection capability can be considered a general-purpose method and placed in the utils package for users; it is not appropriate as a tool for the LLM to call. It is good that you want to add sensitive word feedback in the feedback package, but please be careful not to make it the default feedback.
I read the Self-Refine paper. The feedback is generated from the model's initial output and the prompt, then passed back to the model to produce a refined output.
This is a tool for detecting text in agent output, currently providing sensitive word detection based on prefix trees.
If deemed appropriate, I also plan to integrate this capability into FeedBack, given the need for sensitive word detection in agent output.