[Feat_Add] Addition of new LLM evals metric #32

Open
@tarun-aiplanet

Description

Beyond LLM currently supports 4 evaluation metrics: Context relevancy, Answer relevancy, Groundedness, and Ground truth.

We are looking to add support for new evaluation metrics for evaluating LLM/RAG responses:

  • Faithfulness
  • Correctness

or any other research-based metric.
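
For illustration, here is a minimal sketch of what a faithfulness metric could look like, using the common LLM-as-judge pattern: ask a judge model how well every claim in the answer is supported by the retrieved context, then parse a numeric score. The `judge` callable, prompt wording, and 0-10 scale are illustrative assumptions, not the existing Beyond LLM API:

```python
# Hypothetical sketch of a Faithfulness metric (LLM-as-judge pattern).
# The `judge` callable, prompt, and 0-10 scale are assumptions for
# illustration only, not part of the Beyond LLM API.
import re
from typing import Callable

_FAITHFULNESS_PROMPT = (
    "You are grading a RAG answer for faithfulness.\n"
    "Context:\n{context}\n\n"
    "Answer:\n{answer}\n\n"
    "On a scale of 0-10, how well is every claim in the answer "
    "supported by the context? Reply with a single number."
)


def faithfulness_score(context: str, answer: str,
                       judge: Callable[[str], str]) -> float:
    """Return a faithfulness score in [0, 1] from an LLM judge."""
    reply = judge(_FAITHFULNESS_PROMPT.format(context=context, answer=answer))
    match = re.search(r"\d+(\.\d+)?", reply)
    if match is None:
        raise ValueError(f"Judge reply had no numeric score: {reply!r}")
    # Clamp to the 0-10 scale, then normalize to [0, 1].
    return min(float(match.group()), 10.0) / 10.0


if __name__ == "__main__":
    # Stub judge for demonstration; swap in a real LLM call.
    score = faithfulness_score(
        context="Paris is the capital of France.",
        answer="The capital of France is Paris.",
        judge=lambda prompt: "10",
    )
    print(score)  # 1.0
```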

Metadata

Labels: component;evaluate, good first issue, help wanted
