[FEATURE] Optional Evaluation Targets/Evaluators #206

@lsy357

Description

📋 Please ensure that:

  • I have searched existing issues to avoid duplicates
  • I have provided a clear problem statement and solution
  • I understand this is a feature request and not a bug report
  • I am willing to help implement this feature if needed
  • I have submitted this feature request in English (otherwise it will not be processed)

🎯 Problem Statement

The model output content to be evaluated already exists, but there is no way to run only the evaluator's assessment directly through an experiment; the experiment always invokes the evaluation target first.

💡 Proposed Solution

When launching an experiment, let users choose to skip the evaluation target or the evaluator, depending on the scenario.
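As a rough illustration of the proposed option, the sketch below shows what skipping stages could look like. All names here (`Experiment`, `skip_target`, `skip_evaluator`, `precomputed_outputs`) are hypothetical, not the project's actual API:

```python
# Hypothetical sketch only -- class and parameter names are illustrative,
# not the project's real API.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    skip_target: bool = False      # skip invoking the model (evaluation target)
    skip_evaluator: bool = False   # skip running the evaluator
    precomputed_outputs: dict = field(default_factory=dict)

    def run(self, inputs):
        # Stage 1: evaluation target (the model under test).
        if self.skip_target:
            # Use outputs the user already obtained elsewhere.
            outputs = {i: self.precomputed_outputs[i] for i in inputs}
        else:
            outputs = {i: f"model-output-for-{i}" for i in inputs}
        # Stage 2: evaluator.
        if self.skip_evaluator:
            return outputs
        # Toy scoring stand-in for a real evaluator.
        return {i: len(out) for i, out in outputs.items()}

# Evaluate precomputed outputs without calling the model:
exp = Experiment(skip_target=True,
                 precomputed_outputs={"q1": "cached answer"})
scores = exp.run(["q1"])
```

With `skip_target=True`, the experiment evaluates the supplied outputs directly; with `skip_evaluator=True`, it only collects model outputs for later use.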

📋 Use Cases

The user has already obtained the model output but cannot run the evaluation on it.

⚡ Priority

None

🔧 Component

None

🔄 Alternatives Considered

No response

🎨 Mockups/Designs

No response

⚙️ Technical Details

No response

✅ Acceptance Criteria

No response

📝 Additional Context

No response

Metadata

Assignees

Labels

enhancement (New feature or request)

Type

No type

Projects

No projects

Relationships

None yet

Development

No branches or pull requests
