
Feat: The tool of text detection #5

Open

Feizekai wants to merge 2 commits into antgroup:main from Feizekai:main

Conversation

@Feizekai
Contributor

This is a tool that can be used for agent output text detection; it currently provides sensitive-word detection based on prefix trees (tries).
If deemed appropriate, I also plan to integrate this capability into FeedBack, given the need for sensitive-word detection in agent output.
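The prefix-tree detection the PR describes can be sketched as follows. This is a minimal illustration, not the PR's actual code; the names `trieNode`, `Detector`, `NewDetector`, and `Detect` are all assumptions for the example.

```go
package main

import "fmt"

// trieNode is one node of a prefix tree over runes.
type trieNode struct {
	children map[rune]*trieNode
	isEnd    bool
}

// Detector holds the root of a prefix tree built from sensitive keywords.
type Detector struct {
	root *trieNode
}

// NewDetector builds the prefix tree from the given keywords.
func NewDetector(keywords []string) *Detector {
	root := &trieNode{children: map[rune]*trieNode{}}
	for _, kw := range keywords {
		node := root
		for _, r := range kw {
			next, ok := node.children[r]
			if !ok {
				next = &trieNode{children: map[rune]*trieNode{}}
				node.children[r] = next
			}
			node = next
		}
		node.isEnd = true
	}
	return &Detector{root: root}
}

// Detect returns every distinct keyword that occurs in content,
// by walking the trie from each starting position.
func (d *Detector) Detect(content string) []string {
	runes := []rune(content)
	found := map[string]bool{}
	for i := range runes {
		node := d.root
		for j := i; j < len(runes); j++ {
			next, ok := node.children[runes[j]]
			if !ok {
				break
			}
			node = next
			if node.isEnd {
				found[string(runes[i:j+1])] = true
			}
		}
	}
	hits := make([]string, 0, len(found))
	for kw := range found {
		hits = append(hits, kw)
	}
	return hits
}

func main() {
	d := NewDetector([]string{"bomb", "attack"})
	fmt.Println(d.Detect("the plan is to attack at dawn")) // prints: [attack]
}
```

Walking the trie from every starting index keeps matching linear in the content length times the longest keyword, instead of scanning the content once per keyword.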

```go
if m["keywords"] == nil {
	return "keywords are required", nil
}
```

Collaborator


`keywords` and `content` in the input are not used in the processor.

```go
if !ok {
	processor = textDetector[_defaultMode]
}
return &Tool{
```
Collaborator


I understand that this logic implies `keywords` and `content` are set during initialization of the tool, which does not align with the expected behavior. `keywords` or `content` should be outputs generated by the LLM and should not be predefined or set at this stage.

Contributor Author


So should the assertion of keywords be done by the LLM? Actually, I want to add an intervention method as a tool.

Collaborator


> So should the assertion of keywords be done by the LLM? Actually, I want to add an intervention method as a tool.

I think the keywords can be set by the user, and the LLM calls the tool with the content to do sensitive-word detection.

If the keywords were given by the LLM, the LLM could do sensitive-word detection itself without calling the tool.
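The design suggested here, keywords fixed by the user at construction time and only the content supplied in the LLM's call, could look roughly like this. Everything below is a hypothetical sketch: `KeywordTool`, `NewKeywordTool`, and `Run` are illustrative names, and `strings.Contains` stands in for the PR's prefix-tree detector.

```go
package main

import (
	"fmt"
	"strings"
)

// KeywordTool holds user-configured keywords; the LLM never supplies them.
type KeywordTool struct {
	keywords []string
}

// NewKeywordTool fixes the keyword list when the user constructs the tool.
func NewKeywordTool(keywords []string) *KeywordTool {
	return &KeywordTool{keywords: keywords}
}

// Run is the entry point the LLM would call; its input carries only "content".
// A real implementation would delegate to the prefix-tree detector instead
// of the strings.Contains placeholder used here.
func (t *KeywordTool) Run(input map[string]any) (string, error) {
	content, ok := input["content"].(string)
	if !ok || content == "" {
		return "content is required", nil
	}
	var hits []string
	for _, kw := range t.keywords {
		if strings.Contains(content, kw) {
			hits = append(hits, kw)
		}
	}
	if len(hits) == 0 {
		return "no sensitive words found", nil
	}
	return fmt.Sprintf("sensitive words found: %v", hits), nil
}

func main() {
	tool := NewKeywordTool([]string{"attack", "bomb"})
	out, _ := tool.Run(map[string]any{"content": "the plan is to attack at dawn"})
	fmt.Println(out) // prints: sensitive words found: [attack]
}
```

Keeping the keyword list out of the LLM's input also prevents the model from trivially bypassing the check by omitting keywords from its call.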

@tyloafer
Collaborator

tyloafer commented Feb 6, 2025

> This is a tool that can be used for agent output text detection; it currently provides sensitive-word detection based on prefix trees. If deemed appropriate, I also plan to integrate this capability into FeedBack, given the need for sensitive-word detection in agent output.

This sensitive-word detection capability can be considered a general-purpose method and placed in the utils package for users to use. It is not appropriate to expose it as a tool for the LLM to call.

And it is fine that you want to add sensitive-word feedback in the feedback package, but please be careful not to set it as the default feedback.

@Feizekai
Contributor Author

Feizekai commented Feb 6, 2025

I read the paper about Self-Refine. The feedback is generated from the model's initial output and the prompt, and that feedback is then passed back to the model to produce a refined output.
I understand that sensitive words could be written into the prompt rather than set as a default feedback output.
