Closed
Labels
documentation, enhancement, python
Description
Summary
We've created an MLflow integration that uses Instructor/Pydantic for structured output validation in LLM evaluation workflows. This allows MLflow users to validate LLM outputs against Pydantic schemas without additional LLM calls.
MLflow PR: mlflow/mlflow#20628
What This Enables
MLflow users can now use Pydantic schemas as evaluation scorers:
```python
from pydantic import BaseModel
from mlflow.genai.scorers.instructor import SchemaCompliance, FieldCompleteness

class UserInfo(BaseModel):
    name: str
    email: str
    age: int

# Validate LLM outputs against schema
scorer = SchemaCompliance()
feedback = scorer(
    outputs={"name": "John", "email": "john@example.com", "age": 30},
    expectations={"schema": UserInfo},
)
print(feedback.value)  # "yes" or "no"
```

Scorers Implemented
| Scorer | Purpose |
|---|---|
| SchemaCompliance | Validates output matches Pydantic schema |
| FieldCompleteness | Checks required fields are present and non-null |
| TypeValidation | Verifies field types match schema definitions |
| ConstraintValidation | Checks Pydantic validators/constraints pass |
| ExtractionAccuracy | Compares extracted fields against ground truth |
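To illustrate the core idea behind SchemaCompliance, here is a minimal sketch of what schema-based scoring amounts to, using only Pydantic (no MLflow dependency). The `schema_compliance` helper is hypothetical, written for this example; the actual scorer API lives in the linked PR.

```python
from pydantic import BaseModel, ValidationError

class UserInfo(BaseModel):
    name: str
    email: str
    age: int

def schema_compliance(outputs: dict, schema: type[BaseModel]) -> str:
    # Hypothetical stand-in for the SchemaCompliance scorer:
    # returns "yes" if the outputs validate against the schema, else "no".
    try:
        schema.model_validate(outputs)
        return "yes"
    except ValidationError:
        return "no"

print(schema_compliance({"name": "John", "email": "john@example.com", "age": 30}, UserInfo))  # "yes"
print(schema_compliance({"name": "John"}, UserInfo))  # "no" (missing required fields)
```

Because validation happens entirely in Pydantic, no additional LLM call is needed to score an output.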
Request
Would you be interested in:
- Adding MLflow to your integrations/ecosystem documentation?
- Any feedback on the integration approach?
Happy to collaborate on documentation or improvements.