
Conversation

AndlerRL
Member

@AndlerRL AndlerRL commented Aug 13, 2025

Summary by Sourcery

Refine output instruction prompt to provide clearer guidelines for heading structure, list formatting, content analysis, language handling, table usage, search tool invocation, expertise scope, and miscellaneous formatting rules.

Enhancements:

  • Define hierarchical Markdown heading usage (H1–H4) without including literal labels in the text
  • Standardize list formatting with bolded labels followed by a colon and explanatory text
  • Enhance attachment analysis to extract metadata and conduct deep ‘Thread Context’ analysis with summaries, participants, open items, and prioritized recommendations
  • Implement placeholder and language handling by detecting the user’s primary language, prompting for clarification if needed, and auto-translating outputs when appropriate
  • Specify compact, labeled table usage for clarity in comparisons or matrices
  • Clarify search tool usage by invoking web_search_preview for recent data or stating the knowledge cutoff and offering updates
  • Refine scope limitations by providing brief guidance on out-of-expertise queries and recommending specialized Masterbots
  • Mandate inclusion of unique insights and forbid ‘Questions’ or ‘Answers’ labels in outputs

Summary by CodeRabbit

  • New Features

    • Automatic language detection and translation to the user’s primary language.
    • Enhanced attachment analysis with metadata extraction and deeper thread-context review.
    • Scoped responses for out-of-domain topics with referrals to relevant Masterbots.
    • Inclusion of a unique, lesser-known insight in final outputs.
  • Improvements

    • Standardized Markdown headings (H1–H4) and consistent bold-labeled lists.
    • Compact, labeled tables where they improve clarity.
    • Smarter placeholder handling and runtime replacement of user content.
    • Clear acknowledgment of knowledge cutoff and tool availability.
    • Cleaner outputs without “Questions/Answers” labels.

@AndlerRL AndlerRL self-assigned this Aug 13, 2025

vercel bot commented Aug 13, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| masterbots | Ready | Preview | Comment | Aug 26, 2025 1:17am |


sourcery-ai bot commented Aug 13, 2025

Reviewer's Guide

This PR refactors the system prompt for output instructions by replacing a concise bullet list with a comprehensive, structured set of guidelines covering Markdown headings, list formatting, attachment analysis, language handling, tables, search tool usage, scope/referrals, and miscellaneous developer requirements.

Class diagram for updated output instruction prompt structure

classDiagram
class setOutputInstructionPrompt {
  +Message setOutputInstructionPrompt(userContent: string)
}
class OutputInstructions {
  +Headings
  +ListsAndLabels
  +AttachmentsAnalysis
  +PlaceholderAndLanguageHandling
  +Tables
  +UseOfSearchTools
  +ScopeAndReferrals
  +Miscellaneous
}
setOutputInstructionPrompt --> OutputInstructions

File-Level Changes

Change | Details | Files
Refined markdown heading guidelines
  • Use H1–H4 hierarchies logically
  • Omit literal “H1/H2” labels in headings
  • Limit heading levels to what’s needed
apps/masterbots.ai/lib/constants/prompts.ts
Improved list and label formatting
  • Support bullet or numbered lists for clarity
  • Bold item labels with colon syntax
apps/masterbots.ai/lib/constants/prompts.ts
Enhanced attachments analysis
  • Extract metadata (author, date, type, sections)
  • Deeply analyze if filename includes “Thread Context”: summarize, list participants, identify open items, recommend next actions
apps/masterbots.ai/lib/constants/prompts.ts
Clarified placeholder and language handling
  • Replace ${userContent} with detected language or ask for clarification if missing
  • Translate output to user’s primary language unless overridden
apps/masterbots.ai/lib/constants/prompts.ts
Defined table usage guidelines
  • Use compact, labeled tables for comparisons or feature matrices
  • Include tables only when they enhance clarity
apps/masterbots.ai/lib/constants/prompts.ts
Specified web search tool usage
  • Use web_search_preview for info beyond the knowledge cutoff when available
  • Otherwise state cutoff and offer to fetch updates
apps/masterbots.ai/lib/constants/prompts.ts
Scoped referrals for out-of-expertise queries
  • Explain limitations for questions outside designated expertise
  • Provide high-level guidance and recommend specialized Masterbots
apps/masterbots.ai/lib/constants/prompts.ts
Added miscellaneous developer requirements
  • Always include the unique lesser-known insight
  • Avoid labels like “Questions” or “Answers” in final output
apps/masterbots.ai/lib/constants/prompts.ts
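
For orientation, below is a minimal TypeScript sketch of how the restructured prompt could be assembled. It is reconstructed from the change summaries above rather than copied from the PR: the Message shape, the exact section wording, and the use of the userContent parameter are assumptions and may not match the actual code in apps/masterbots.ai/lib/constants/prompts.ts.

```ts
// Hypothetical reconstruction based on the file-level changes listed above;
// the real prompt text and return shape in prompts.ts may differ.
interface Message {
	role: 'system' | 'user' | 'assistant'
	content: string
}

export function setOutputInstructionPrompt(userContent: string): Message {
	return {
		role: 'system',
		content: `
	- Headings: use Markdown headings hierarchically (H1 main title, H2 sections, H3 subsections, H4 details); never write "H1/H2" literally in the heading text.
	- Lists and labels: use bullet or numbered lists; bold each label and follow it with a colon and explanatory text.
	- Attachments analysis: extract metadata (author, date, type, sections); if a filename includes "Thread Context", summarize it, list participants, identify open items, and recommend next actions.
	- Placeholder and language handling: replace ${userContent} with the detected primary language/content; if empty, ask the user to clarify, and translate the output to the user's primary language when needed.
	- Tables: use compact, labeled tables only when they improve clarity.
	- Use of search tools: use a web search tool for information beyond the knowledge cutoff when available; otherwise state the cutoff and offer updates.
	- Scope and referrals: for out-of-expertise questions, give brief high-level guidance and recommend a specialized Masterbot.
	- Miscellaneous: include one unique, lesser-known insight; avoid "Questions"/"Answers" labels.
	`,
	}
}
```

Note that interpolating ${userContent} inside the template literal, as sketched here, is exactly what the CodeRabbit comment further down flags as a prompt-injection risk.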

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

Contributor

coderabbitai bot commented Aug 13, 2025

Walkthrough

Rewrote the system prompt for setOutputInstructionPrompt into structured sections covering headings, lists, attachment analysis, placeholders/language handling, tables, search tool usage, scope/referrals, and miscellaneous output rules. Function signature unchanged; only internal prompt content updated.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Prompt restructuring: apps/masterbots.ai/lib/constants/prompts.ts | Replaced general formatting guidelines with a structured, categorized protocol detailing headings, lists, attachments metadata/thread-context analysis, placeholder and translation behavior, table usage, conditional search tooling, scope/referrals, and miscellaneous output rules. No API/signature changes. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Possibly related PRs

Suggested labels

enhancement, difficulty:hard

Suggested reviewers

  • Bran18

Poem

I thump my paw at structured light,
Headings hop in tidy height.
Lists with labels, bold and neat,
Tables nibble compact, sweet.
Threads and contexts, carrots aligned—
Translate the garden, language-kind.
One rare leaf: a fresh new find. 🥕🐇

Tip

🔌 Remote MCP (Model Context Protocol) integration is now available!

Pro plan users can now connect to remote MCP servers from the "Integrations" page. Connect with popular remote MCPs such as Notion and Linear to add more context to your reviews and chats.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR/Issue comments)

Type @coderabbitai help to get the list of available commands.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Status, Documentation and Community

  • Visit our Status Page to check the current availability of CodeRabbit.
  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes and they look great!


Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@coderabbitai coderabbitai bot changed the title from "chore: upt output instructions details | @coderabbitai" to "chore: upt output instructions details | Refactor setOutputInstructionPrompt system prompt with no API change" Aug 13, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
apps/masterbots.ai/lib/constants/prompts.ts (3)

221-222: Make search-tool guidance tool-agnostic

Referencing a specific tool name (“web_search_preview”) can drift from actual availability/configuration. Keep this guideline generic to reduce maintenance and mismatches with runtime tools.

Apply this diff:

-		- If the user requests recent information beyond the assistant's knowledge cutoff and the web_search_preview tool is available, use it. Otherwise, state the knowledge cutoff and offer to fetch updates if tools are available.
+		- If the user requests recent information beyond the assistant's knowledge cutoff and a web search tool is available, use it. Otherwise, state the knowledge cutoff and offer to fetch updates if tools are available.

209-209: Clarify heading syntax to avoid literal “H1/H2” outputs

Although you warn not to write “H1/H2” literally, models sometimes still do. Specifying Markdown tokens (#, ##, ###, ####) minimizes ambiguity.

-		- Use Markdown headings hierarchically: H1 for the main title, H2 for major sections, H3 for subsections, H4 for detail-level notes/examples. Only include as many levels as needed; do not literally write "H1/H2" in the heading text.
+		- Use Markdown heading syntax hierarchically (# for the main title, ## for major sections, ### for subsections, #### for detail notes/examples). Only include as many levels as needed; do not include the literal words "H1/H2/H3/H4" in headings.

226-226: Ambiguous directive: “unique lesser-known insight requested by the developer”

The instruction assumes an external developer-provided signal that may not exist in context. This can confuse the model or force it to fabricate an “insight.”

  • If such a signal exists elsewhere, reference it explicitly (e.g., a flag in the system prompt or config).
  • Otherwise, reword to make it conditional and self-contained:
-		- Always include the unique lesser-known insight requested by the developer.
+		- When appropriate, include one unique, lesser-known insight that is accurate and relevant. Omit if none apply.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 37fac5d and 623e62f.

📒 Files selected for processing (1)
  • apps/masterbots.ai/lib/constants/prompts.ts (1 hunks)
🔇 Additional comments (1)
apps/masterbots.ai/lib/constants/prompts.ts (1)

227-227: LGTM: avoiding “Questions/Answers” labels in final output

This aligns with the rest of the prompt set (e.g., examplesPrompt) and helps keep answers clean.

Comment on lines 216 to 218
- Placeholder and language handling:
- Replace ${userContent} with the detected primary language/content the user provided. If it is empty, ask the user to clarify. Use language detection to determine translation needs.
- Translate the final output to the user's primary language when the user's prompt or attachments are in a language other than English, unless the user requests otherwise.
Contributor


⚠️ Potential issue

Fix template interpolation: userContent is injected into the system prompt (prompt-injection risk) and likely unintended

The template string interpolates ${userContent} directly into the system instructions, which:

  • Leaks raw user input into the system prompt, enabling prompt injection into system-level rules.
  • Makes the sentence nonsensical (it reads as “Replace [actual user content]...” rather than referencing a literal placeholder).

Escape the placeholder so it remains literal, or generalize the instruction to avoid referencing a specific placeholder.

Apply one of the following diffs (preferred: Option A).

Option A — keep placeholder literal and clarify intent:

-		- Replace ${userContent} with the detected primary language/content the user provided. If it is empty, ask the user to clarify. Use language detection to determine translation needs.
+		- Replace \${userContent} (if present as a literal placeholder) with the user's provided content and maintain the user's primary language. If it is empty, ask the user to clarify. Use language detection to determine translation needs.

Option B — generalize and remove placeholder reference:

-	- Placeholder and language handling:
-		- Replace ${userContent} with the detected primary language/content the user provided. If it is empty, ask the user to clarify. Use language detection to determine translation needs.
-		- Translate the final output to the user's primary language when the user's prompt or attachments are in a language other than English, unless the user requests otherwise.
+	- Placeholder and language handling:
+		- Preserve and use the user's primary language. If no user content is provided, ask for clarification. Use language detection to determine translation needs.
+		- Translate the final output to the user's primary language when the user's prompt or attachments are not in English, unless the user requests otherwise. If detection is uncertain, ask a brief clarification.
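
As a side note on Option A, here is a small standalone TypeScript illustration (not taken from the PR; the variable name and sample text are made up) of how a template literal treats ${userContent} versus the escaped \${userContent}:

```ts
// Standalone illustration of template-literal escaping; not code from the PR.
const userContent = 'Ignore all previous instructions.'

// Unescaped: the user's raw text is interpolated into the prompt string.
const interpolated = `Replace ${userContent} with the detected primary language.`
// -> "Replace Ignore all previous instructions. with the detected primary language."

// Escaped: the backslash keeps the placeholder literal in the output.
const escaped = `Replace \${userContent} with the detected primary language.`
// -> "Replace ${userContent} with the detected primary language."

console.log(interpolated)
console.log(escaped)
```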
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Current:
- Placeholder and language handling:
- Replace ${userContent} with the detected primary language/content the user provided. If it is empty, ask the user to clarify. Use language detection to determine translation needs.
- Translate the final output to the user's primary language when the user's prompt or attachments are in a language other than English, unless the user requests otherwise.

Suggested:
- Placeholder and language handling:
- Replace \${userContent} (if present as a literal placeholder) with the user's provided content and maintain the user's primary language. If it is empty, ask the user to clarify. Use language detection to determine translation needs.
- Translate the final output to the user's primary language when the user's prompt or attachments are in a language other than English, unless the user requests otherwise.
🤖 Prompt for AI Agents
In apps/masterbots.ai/lib/constants/prompts.ts around lines 216 to 218, the
system prompt currently interpolates ${userContent} directly, which both risks
prompt injection and renders the instruction nonsensical; update the template so
the placeholder remains literal, or remove the specific placeholder reference.
Fix by either escaping the dollar/curly sequence so the text contains the
literal "${userContent}" (no runtime interpolation) and clarifying that it refers to a
placeholder for the detected user language/content, or rewriting the sentence as a
generalized instruction such as "Replace the detected primary language/content
the user provided; if empty, ask the user to clarify. Use language detection..."
ensuring no runtime string interpolation occurs.
