Capturing user feedback is critical for understanding the real-world quality of your GenAI application. MLflow's **Feedback API** provides a structured, standardized approach to collecting, storing, and analyzing user feedback directly within your traces.
## Why Use MLflow Feedback for User Feedback?
<FeatureHighlights features={[
  {
    icon: Target,
    title: "Direct Trace Integration",
    description: "Feedback is linked directly to specific application executions, creating an immediate connection between user reactions and system performance."
  },
  {
    icon: Shield,
    title: "Structured Data Model",
    description: "Standardized format with clear attribution and rationale ensures consistent feedback collection across your entire application."
  },
  {
    icon: BarChart3,
    title: "Production Ready",
    description: "Available in OSS MLflow 3.2.0+ with no external dependencies, designed for high-throughput production environments."
  },
  {
    icon: ThumbsUp,
    title: "Complete Audit Trail",
    description: "Track every feedback change with timestamps and user attribution, enabling comprehensive quality analysis over time."
  }
]} />
## Step-by-Step Guide: Collecting User Feedback
### 1. Set Up Your GenAI Application with Tracing
First, create a simple application that automatically generates traces using MLflow's OpenAI autologging:
### 2. Log User Feedback on the Trace

Once a trace exists, attach the user's reaction to it. A minimal sketch, assuming MLflow 3.2+ and a placeholder trace ID:

```python
import mlflow
from mlflow.entities import AssessmentSource, AssessmentSourceType, Feedback

feedback = Feedback(
    name="user_satisfaction",
    value=True,
    rationale="Accurate answer with clear formatting",
    source=AssessmentSource(
        source_type=AssessmentSourceType.HUMAN,
        source_id="user-123",
    ),
)

mlflow.log_assessment(trace_id="<trace_id>", assessment=feedback)
# Equivalent to log_feedback(trace_id="<trace_id>", name=feedback.name, value=feedback.value, ...)
```
### 3. View Feedback in MLflow UI
After collecting feedback, you can view it in the MLflow UI:
<ImageBox
  src={AssessmentsTraceDetailImageUrl}
  alt="Feedback in MLflow UI"
  width="90%"
/>
The trace detail page shows all feedback attached to your traces, making it easy to analyze user satisfaction and identify patterns in your application's performance.
### 4. Adding and Updating Feedback via UI
Users can also provide feedback directly through the MLflow UI:
**Creating New Feedback:**

<ImageBox
  src={AddFeedbackImageUrl}
  alt="Create Feedback"
  width="90%"
/>

**Adding Additional Feedback:**

<ImageBox
  src={AdditionalFeedbackImageUrl}
  alt="Additional Feedback"
  width="90%"
/>

This collaborative approach enables both programmatic feedback collection and manual review workflows.

## Supported Value Types

MLflow feedback supports various formats to match your application's needs:

| Feedback Type | Description            | Example Use Cases                             |
| ------------- | ---------------------- | --------------------------------------------- |
| Boolean       | `true`/`false` values  | Thumbs up/down, correct/incorrect judgments   |
| Numeric       | Integers or floats     | Star ratings, quality scores                  |
| Text          | Free-form strings      | Written comments, detailed explanations       |
| Structured    | Lists or dictionaries  | Multi-dimension rubrics, itemized assessments |
## Best Practices

<FeatureHighlights features={[
  {
    icon: ThumbsUp,
    title: "Start with Simple Boolean Feedback",
    description: "Use MLflow's boolean feedback type for simple thumbs up/down collection. Once you analyze patterns with MLflow's search APIs, expand to numeric ratings or structured feedback types."
  },
  {
    icon: Clock,
    title: "Link Feedback to Fresh Traces",
    description: "Collect feedback immediately after trace generation when the interaction context is available. MLflow's direct trace-feedback linkage ensures you always have the full execution context."
  },
  {
    icon: Database,
    title: "Use Consistent Naming Conventions",
    description: "Standardize feedback names like 'user_satisfaction' or 'quality_rating' across traces. This enables MLflow's search and aggregation features to provide meaningful insights across your application."
  },
  {
    icon: Lock,
    title: "Use Source Attribution Properly",
    description: "Set meaningful source_id values in AssessmentSource objects for tracking feedback providers. MLflow preserves complete audit trails with timestamps and source attribution."
  },
  {
    icon: Users,
    title: "Combine Programmatic and UI Collection",
    description: "Use MLflow's API for automated collection and the UI for manual review. Both methods integrate seamlessly, allowing different teams to contribute feedback through their preferred interface."
  }
]} />