[Feature Request] Provide example of evaluator_config for evaluate #3203

Open
@pamelafox

Description

I suspect that I need to specify `evaluator_config` for `evaluate` in order to map the data from the target's response, but there's no example of it in the docstring or at https://pypi.org/project/promptflow-evals/0.2.0.dev0/.

Can you provide an example either in the docstring or the promptflow-evals docs?
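For context, here's a sketch of what I'm guessing the shape might be, based on the `evaluate()` signature: a dict keyed by evaluator name, where each entry maps an evaluator's input parameter to either a data column (`${data.<column>}`) or a target output (`${target.<output>}`). The evaluator name and field names below are hypothetical, which is exactly why a documented example would help.

```python
# Guessed shape of evaluator_config (unverified; names are hypothetical):
# each evaluator gets its own mapping from input parameter to a
# "${data.<column>}" or "${target.<output>}" reference.
evaluator_config = {
    "relevance": {
        "question": "${data.question}",   # column from the input data file
        "answer": "${target.answer}",     # output produced by the target
    },
}

# ...which would presumably be passed along like:
# result = evaluate(
#     data="data.jsonl",
#     target=my_target_fn,
#     evaluators={"relevance": relevance_evaluator},
#     evaluator_config=evaluator_config,
# )
```

If the actual format differs (e.g. a nested `column_mapping` key, or different reference syntax), that's precisely the kind of detail the docs should spell out.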
