The world's first named AI prompt quality score — as an MCP server.
Score, optimize, and compare LLM prompts before they hit any model. Built on PEEM, RAGAS, G-Eval, and MT-Bench frameworks.
Add to your config (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "pqs": {
      "command": "npx",
      "args": ["pqs-mcp-server"]
    }
  }
}
```

Or install via Smithery:

```shell
smithery mcp add onchaintel/pqs
```

Score any prompt before it hits any model. Returns a grade (A-F), a score out of 40, and a percentile.
Example output:

```json
{
  "pqs_version": "1.0",
  "prompt": "analyze this wallet",
  "vertical": "crypto",
  "score": 8,
  "out_of": 40,
  "grade": "D",
  "upgrade": "Get full dimension breakdown at /api/score for $0.025 USDC via x402",
  "powered_by": "PQS — pqs.onchainintel.net"
}
```

Score AND optimize any prompt. Returns the full 8-dimension breakdown plus an optimized version.
Requires: PQS API key (get one free at pqs.onchainintel.net)
Compare Claude vs GPT-4o on the same prompt. Judged by a third model. Returns winner, scores, and recommendation.
Requires: PQS API key (get one free at pqs.onchainintel.net)
Specify the domain context for more accurate scoring:

- software — Software engineering, code, debugging
- content — Content creation, copywriting, social media
- business — Business analysis, finance, strategy
- education — Education, research, academic writing
- science — Scientific research, data analysis
- crypto — Crypto trading, DeFi, onchain analysis
- general — General purpose (default)
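Since the scorer falls back to general-purpose scoring, it can help to validate the vertical client-side before building the request body for `/api/score/free`. This is an illustrative sketch: `buildScoreRequest` and the fallback-to-`"general"` behavior are assumptions for this example, not part of the PQS API.

```javascript
// Verticals as listed above; "general" is the default.
const VERTICALS = new Set([
  "software", "content", "business", "education",
  "science", "crypto", "general",
]);

// Illustrative helper (not part of the PQS API): builds the JSON
// body for POST /api/score/free, falling back to "general" when
// the vertical is not one of the recognized values.
function buildScoreRequest(prompt, vertical = "general") {
  return JSON.stringify({
    prompt,
    vertical: VERTICALS.has(vertical) ? vertical : "general",
  });
}
```

The result can be passed directly as the `body` of the `fetch` call shown in the quality-gate example below.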
Use PQS as a pre-inference quality gate:

```javascript
const res = await fetch("https://pqs.onchainintel.net/api/score/free", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: userPrompt, vertical: "software" })
});
const { score: pqsScore } = await res.json();
if (pqsScore < 28) throw new Error("Prompt quality too low — improve and retry");
```

A grade of D or below (< 28/40) means the prompt will waste inference spend.
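The inline gate above can be wrapped as a reusable function. The endpoint and request shape come from this README; the 28/40 threshold matches the Grade D cutoff stated above. The function names here are illustrative, not part of the PQS API.

```javascript
const PQS_ENDPOINT = "https://pqs.onchainintel.net/api/score/free";
const GATE_THRESHOLD = 28; // below this, Grade D or worse

// Pure threshold check, separated out so the gating rule is
// easy to test and tune independently of the network call.
function passesGate(pqsScore, threshold = GATE_THRESHOLD) {
  return pqsScore >= threshold;
}

// Scores a prompt and throws if it falls below the gate,
// so callers never spend inference on a low-quality prompt.
async function gatedPrompt(prompt, vertical = "general") {
  const res = await fetch(PQS_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, vertical }),
  });
  const { score } = await res.json();
  if (!passesGate(score)) {
    throw new Error(`Prompt scored ${score}/40 — improve and retry`);
  }
  return prompt;
}
```

Callers can then do `await gatedPrompt(userPrompt, "software")` immediately before their model call.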
John / OnChainIntel — @OnChainAIIntel
pqs.onchainintel.net