Fix: System prompts for Guardrailing Excerpt & Title#440
The following accounts have interacted with this PR and/or linked issues. I will continue to update these lists as activity occurs. You can also manually ask me to refresh this list.

Unlinked Accounts

The following contributors have not linked their GitHub and WordPress.org accounts: @sagardholakiya, @AasthaPandya, @divyawpdev. Contributors, please read how to link your accounts to ensure your work is properly credited in WordPress releases. If you're merging code through a pull request on GitHub, copy and paste the following into the bottom of the merge commit message. To understand the WordPress project's expectations around crediting contributors, please review the Contributor Attribution page in the Core Handbook.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@            Coverage Diff             @@
##             develop     #440   +/- ##
==========================================
  Coverage      68.44%   68.44%
  Complexity       846      846
==========================================
  Files             56       56
  Lines           4095     4095
==========================================
  Hits            2803     2803
  Misses          1292     1292
Flags with carried forward coverage won't be shown.
@sagardholakiya, @AasthaPandya, and @divyawpdev, note that if you link your GitHub and WPORG profiles, then I can properly credit you in the AI plugin 0.8.0 release post, as well as get the AI Contributor badge added to your WPORG profiles.
What?
Closes #395
Closes #399
Adds an output-format guardrail to the Excerpt Generation and Title Generation system instructions so the model returns only the excerpt or title itself, with no conversational preamble, wrapper quotes, code fences, markdown, or trailing meta-commentary.
Why?
Both issues report the same class of bug: generated excerpts and titles arrive with extra wrapper text that gets saved verbatim into the post's excerpt / title / slug.
The root cause in both cases is that the existing system instructions do not explicitly constrain the model to return only the excerpt/title text. Frontier models mostly infer this; smaller instruction-tuned models do not, and add conversational scaffolding by default.
As discussed on #395, the agreed approach is a minimal prompt-level fix rather than output post-processing, and one that does not bloat the instructions.
How?
Adds one requirement bullet at the end of each affected system instruction. The wording follows the "eliminate preambles" recipes from major AI providers: a positive output contract plus a short, specific negative list, the pattern that is empirically most robust across both frontier and smaller models.
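For illustration, a guardrail bullet in this style might read as follows (hypothetical wording, not necessarily the exact text merged; see the diff for that):

```
- Return ONLY the excerpt text itself. Do not include any preamble
  (e.g. "Here's the excerpt:"), surrounding quotes, code fences,
  markdown formatting, or commentary after the excerpt.
```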
Use of AI Tools
AI assistance: Yes
Tool(s): Claude Code
Model(s): Claude Opus 4.6
Used for: Researching cross-model prompt techniques for suppressing preambles (Anthropic prompt-engineering docs, OpenAI GPT-4.1 guide, Meta Llama guide). Final wording, file edits, and testing were implemented by me.
Testing Instructions
I tested with Meta-Llama-3-8B-Instruct.

1. Generate an excerpt. Repeat 5–10 times. Expected: each generated excerpt is plain text with no `Here's...` / `Sure,` preface, no surrounding quotes, no markdown.
2. Re-generate the title. Repeat 5–10 times. Expected: a plain, single-line title; no `Based on the content...`, no `**` / `###`, no `(N characters)` suffix, no `Why this works:` explanation.

Screenshots or screencast
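While repeating the generations above, the expected-format checks can be sketched as a small script. This is a hypothetical helper for manual verification only; it is not part of the plugin, which deliberately fixes this at the prompt level rather than via post-processing:

```python
import re

# Illustrative patterns mirroring the manual expectations above;
# not the plugin's actual validation logic.
PREAMBLE = re.compile(r"^(here('s| is)|sure[,!]|based on the content)", re.IGNORECASE)

def looks_clean(text: str) -> bool:
    """Return True if `text` is free of obvious wrapper artifacts."""
    s = text.strip()
    if PREAMBLE.match(s):
        return False              # conversational preface
    if s.startswith(('"', "'", "```")):
        return False              # wrapper quotes or code fence
    if "**" in s or s.startswith("#"):
        return False              # markdown emphasis or heading
    return True

print(looks_clean("Ten tips for faster WordPress sites."))  # True
print(looks_clean("Here's a title: Ten Tips"))              # False
```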
Changelog Entry