fix: incorrect aggregate pipelines construction MCP-351 #833
Conversation
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
This PR fixes issues with the aggregate tool's pipeline construction and output messaging. The changes ensure proper handling of write operations ($merge, $out) without applying limits, correct detection of search stages, and improved result messaging that avoids redundancy when document counts match.
Key changes:
- Special-case handling for $merge and $out stages to prevent applying limits after write operations
- Enhanced search stage detection to include $search and $searchMeta alongside $vectorSearch
- Refined message generation to avoid redundancy when returned document count equals total document count
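The stage checks described above can be sketched as follows. The helper names and `Stage` type are illustrative, not the PR's actual code; only the stage lists come from the PR description:

```typescript
// Illustrative sketch of the stage detection described above.
// Helper names are hypothetical; only the stage lists come from the PR description.
type Stage = Record<string, unknown>;

const WRITE_STAGES = ["$out", "$merge"] as const;
const SEARCH_STAGES = ["$search", "$searchMeta", "$vectorSearch"] as const;

// A pipeline containing $out/$merge writes its results, so no limit should be appended.
function isWritePipeline(pipeline: Stage[]): boolean {
  return pipeline.some((stage) => WRITE_STAGES.some((name) => name in stage));
}

// Search-stage detection now covers $search and $searchMeta alongside $vectorSearch.
function hasSearchStage(pipeline: Stage[]): boolean {
  return pipeline.some((stage) => SEARCH_STAGES.some((name) => name in stage));
}
```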
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/tools/mongodb/read/aggregate.ts | Added early return path for write pipelines, fixed search stage detection logic, and refactored message generation to conditionally append document count details |
| tests/integration/tools/mongodb/read/aggregate.test.ts | Updated expected messages to remove redundancy, added tests for $limit and $out stages, refactored limit tests with helper function, and removed duplicate message expectations |
src/tools/mongodb/read/aggregate.ts
```typescript
if (pipeline.some((stage) => "$out" in stage || "$merge" in stage)) {
    // This is a write pipeline, so special-case it and don't attempt to apply limits or caps
    aggregationCursor = provider.aggregate(database, collection, pipeline);
    const results = await aggregationCursor.toArray();
    return {
        content: formatUntrustedData(
            "The aggregation pipeline executed successfully",
            ...(results.length > 0 ? [EJSON.stringify(results)] : [])
        ),
    };
}
```
Copilot (AI) commented on Jan 7, 2026:
The early return for $out/$merge pipelines bypasses the telemetry emission that occurs later in the function. This creates inconsistent telemetry reporting where write operations won't be tracked. Consider extracting the telemetry logic into a helper function and calling it before this return, or restructuring to ensure all execution paths emit telemetry.
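One way to do this, sketched here with hypothetical names (`emitToolTelemetry` and the event shape are assumptions, not the repository's actual telemetry API):

```typescript
// Hypothetical sketch: a shared telemetry helper that every return path calls.
// emitToolTelemetry and the event shape are assumed names, not the real API.
interface ToolTelemetryEvent {
  tool: string;
  isWritePipeline: boolean;
  resultCount: number;
}

const emitted: ToolTelemetryEvent[] = [];

function emitToolTelemetry(event: ToolTelemetryEvent): void {
  // Stand-in for the real telemetry backend.
  emitted.push(event);
}

// Each execution path calls the helper before returning, so write pipelines
// taking the early return are still tracked.
function finishAggregate(results: unknown[], isWrite: boolean): number {
  emitToolTelemetry({ tool: "aggregate", isWritePipeline: isWrite, resultCount: results.length });
  return results.length;
}
```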
Could we decouple the two different ways of doing this into separate methods, where you get the result and then follow the same logic to return it? This will also help us understand this business logic in the long term.
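A minimal sketch of that decoupling, under assumed names (`buildEffectivePipeline`, `formatResultMessage` are hypothetical, not the PR's actual code):

```typescript
// Illustrative only: splits pipeline construction from result messaging so
// write and read pipelines can share one return path. Names are hypothetical.
type Stage = Record<string, unknown>;

// Write pipelines run unmodified; read pipelines get a cap appended.
function buildEffectivePipeline(pipeline: Stage[], cap: number): Stage[] {
  const isWrite = pipeline.some((stage) => "$out" in stage || "$merge" in stage);
  return isWrite ? pipeline : [...pipeline, { $limit: cap }];
}

// Shared messaging: mention counts only when they add information, matching
// the redundancy the PR removes when returned count equals total count.
function formatResultMessage(returned: number, total: number): string {
  const base = "The aggregation pipeline executed successfully";
  return returned === total ? base : `${base}, returning ${returned} of ${total} documents`;
}
```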
himanshusinghs left a comment:
Nice catch!
Proposed changes
Fixes a few quirks with the `aggregate` tool:
- write pipelines containing `$merge` or `$out` no longer have limits applied
- `$search` and `$searchMeta` stages are now detected as search stages