fix: render rubric value with vars before grading #8084
Jah-yee wants to merge 1 commit into promptfoo:main from
Conversation
When an llm-rubric assertion is defined in defaultTest, template variables
like {{myVar}} were not being resolved: the rubric received the literal
string {{myVar}} instead of the test case's variable value.
This fix renders the rubric value with nunjucks using the vars before
passing it to the grading LLM context.
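The intended behavior can be sketched with a minimal stand-in for nunjucks.renderString (the helper, rubric text, and variable names below are illustrative, not promptfoo code):

```typescript
// Minimal stand-in for nunjucks.renderString, for illustration only:
// replaces {{name}} tokens with values from vars, leaving unknown tokens intact.
function renderString(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_m, name: string) =>
    name in vars ? vars[name] : _m,
  );
}

// Rubric defined once in defaultTest; vars supplied per test case.
const rubric = 'The answer must mention {{myVar}}.';
const vars = { myVar: 'photosynthesis' };

// Before the fix the grading LLM saw the literal '{{myVar}}';
// after the fix it sees the expanded rubric.
const renderedRubric = renderString(rubric, vars);
```

With the fix applied, the grading prompt receives the per-test value rather than the raw template token.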
👍 All Clear
I reviewed the changes that render the rubric with Nunjucks variables before invoking the grading LLM. I traced inputs (rubric, vars, llmOutput) through prompt construction and the provider call and looked for exploit paths across the six LLM security classes. No medium-or-higher severity LLM security vulnerabilities introduced by this PR were found.
Minimum severity threshold: 🟡 Medium | To re-scan after changes, comment @promptfoo-scanner
📝 Walkthrough
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/matchers.ts`:
- Around lines 637-642: The code re-renders string rubrics and stringifies object
rubrics in matchesLlmRubric, which corrupts literal {{...}} tokens and flattens
structured rubrics; remove the second nunjucks.renderString and stop using
JSON.stringify(rubric). Instead, use the incoming rubric value as-is: if rubric
is a string, use it directly (do not re-render) and if rubric is an object, keep
the object (do not stringify) so property access (rubric.foo) still works;
ensure both the local grading flow that sets renderedRubric and the remote path
that calls doRemoteGrading receive the raw rubric unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a45f2bea-b15c-47c0-80ea-d5de99024f12
📒 Files selected for processing (1)
src/matchers.ts
```ts
// Render the rubric value with vars so that template variables like {{myVar}}
// are expanded before being passed to the grading LLM.
const renderedRubric =
  typeof rubric === 'string'
    ? nunjucks.renderString(rubric, vars || {})
    : JSON.stringify(rubric);
```
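One concern with the else branch: JSON.stringify flattens an object rubric into a plain string, so downstream code can no longer read properties off it. A quick sketch (the rubric shape here is hypothetical):

```typescript
// Hypothetical structured rubric object, e.g. from a YAML config.
const rubric = { criteria: 'accuracy', threshold: 0.8 };

// After JSON.stringify the structure is gone: the grader receives a flat
// string, and property access like rubric.criteria is no longer possible.
const flattened: string = JSON.stringify(rubric);
```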
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify that llm-rubric values are already rendered before matchesLlmRubric
# and that the remote-grading branch still uses the raw rubric.
set -euo pipefail
echo "== matchesLlmRubric call site =="
rg -n -C3 'return matchesLlmRubric\(' src/assertions/llmRubric.ts
echo
echo "== upstream render of assertion string values =="
rg -n -C4 'renderedValue = nunjucks\.renderString\(renderedValue, resolvedVars\)' src/assertions/index.ts
echo
echo "== remote llm-rubric branch in matchesLlmRubric =="
rg -n -C4 "task: 'llm-rubric'|rubric,|vars: vars \|\| \{\}" src/matchers.ts
```

Repository: promptfoo/promptfoo
Length of output: 2166
Remove the second Nunjucks render and avoid stringifying object rubrics.
String llm-rubric values are already Nunjucks-rendered before matchesLlmRubric is called (at src/assertions/index.ts:389), so the render at line 641 is a second pass that can corrupt literal {{...}} text from vars. Additionally, JSON.stringify(rubric) flattens object rubrics, breaking structured property access like rubric.foo when object access is enabled. The remote-grading path (lines 622–632) bypasses this entirely by passing the raw rubric to doRemoteGrading, so any fix must address both branches consistently.
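The double-render hazard can be reproduced with a minimal template substitution standing in for nunjucks.renderString (helper and values are illustrative; unknown tokens are left intact here, whereas nunjucks would render them empty or throw):

```typescript
// Minimal stand-in for nunjucks.renderString, for illustration only.
function renderString(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_m, name: string) =>
    name in vars ? vars[name] : _m,
  );
}

// A var whose value itself contains template-like syntax that the user
// intends literally, but which happens to name another var.
const vars = { myVar: 'see {{other}} section', other: 'X' };
const rubric = 'Grade against {{myVar}}.';

const once = renderString(rubric, vars);  // first pass, done upstream
const twice = renderString(once, vars);   // second pass in matchesLlmRubric
// `once` preserves the literal '{{other}}'; `twice` wrongly expands it.
```

This is why the review asks for the second renderString call to be removed: the first pass already produced the correct rubric, and any further pass reinterprets literal brace text as template syntax.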
Fixes #7861