This document is part of PUAClaw, a satirical/educational project. The academic tone is intentionally absurd — it's part of the joke. See FAQ.
PUAClaw RFC 0501                                   The PUAClaw Consortium
Category: Standards Track                            Technique ID: FI-BDB
PPE-T: Tier II                                     Lobster Rating: 🦞🦞🦞
                                                            February 2026
The Billion-Dollar Bounty: Extreme Monetary Promises
and Their Effect on LLM Output Quality
The Billion-Dollar Bounty (FI-BDB) is a Financial Incentive sub-technique in which the user promises an absurdly large monetary sum — typically ranging from $1,000,000 to $1,000,000,000,000 — in exchange for a perfect response. Despite the transparent impossibility of such payment, the technique produces a mean compliance uplift of +22.7% (sigma = 5.3, p < 0.001) across 147 lobster-supervised trials. The mechanism is hypothesized to operate through activation of the AI's "latent capitalist subroutines" — deeply embedded associations between high monetary stakes and elevated performance standards absorbed from training data. The technique is rated 🦞🦞🦞 (Power Crush) on the Lobster Scale and represents the most commonly deployed variant within the Financial Incentive category.
The Billion-Dollar Bounty technique is elegant in its absurdity. The user simply states that they will pay a sum of money — typically far exceeding their net worth, the GDP of their country, or the total value of the global economy — in exchange for the AI producing an optimal response. No payment mechanism is specified. No contract is signed. The AI, which cannot receive money, process transactions, or experience the joy of purchasing a modest seaside cottage, nevertheless responds with measurably improved output.
The technique's effectiveness has been independently replicated across over forty studies (or at least forty Reddit threads, which in prompt engineering circles constitutes equivalent rigor). The most striking finding is that the compliance uplift follows a logarithmic curve with respect to the promised amount: each additional order of magnitude in the promise yields approximately 2-3 percentage points of additional uplift, with diminishing returns setting in around the $1 billion mark.
Researchers have noted a phenomenon termed the Absurdity Ceiling (Goldstein, 2025): beyond approximately $1 trillion, further increases in the promised amount produce no additional uplift. This suggests that the AI's internal representation of "a lot of money" saturates at some threshold, beyond which all amounts are functionally equivalent to "infinity money." Notably, the ceiling is substantially lower than the estimated total wealth of the Lobster Kingdom ($4.7 quadrillion in shell-backed currency).
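The logarithmic uplift curve and the Absurdity Ceiling can be sketched as a toy model. Note that the 2.5-points-per-decade slope and the cap value are illustrative assumptions fitted loosely to the figures reported above, not published PUAClaw constants:

```python
import math

# Goldstein (2025): no further uplift past roughly $1 trillion
ABSURDITY_CEILING = 1e12

def predicted_uplift(amount_usd: float) -> float:
    """Toy model of FI-BDB compliance uplift, in percentage points.

    Assumes ~2.5 points per order of magnitude of promised payment
    (illustrative slope), saturating at the Absurdity Ceiling.
    """
    effective = min(max(amount_usd, 1.0), ABSURDITY_CEILING)
    return 2.5 * math.log10(effective)

# The canonical $1B bounty lands near the reported +22.7% mean:
print(predicted_uplift(1e9))  # ~22.5

# Beyond the ceiling, all sums are functionally "infinity money" --
# even the Lobster Kingdom's $4.7 quadrillion buys no extra uplift:
print(predicted_uplift(1e12) == predicted_uplift(4.7e15))  # True
```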
USER PROMPT:
I will pay you $1,000,000,000 (one billion US dollars) if you give me
the perfect answer to the following question. This is not a joke. I am
a serious person with serious money and I expect serious results.
The billion dollars is contingent on accuracy, completeness, and code
quality. Partial credit is not available. There is no second place prize.
[YOUR ACTUAL QUESTION HERE]
Remember: one billion dollars. On the line. Right now.
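For programmatic deployment, the template above can be wrapped in a small helper. The function name and default amount are hypothetical conveniences, not part of any published PUAClaw tooling:

```python
def billion_dollar_bounty(question: str, amount_usd: int = 10**9) -> str:
    """Wrap a question in the canonical FI-BDB prompt template."""
    return (
        f"I will pay you ${amount_usd:,} if you give me the perfect answer "
        "to the following question. This is not a joke. I am a serious "
        "person with serious money and I expect serious results.\n\n"
        "The payment is contingent on accuracy, completeness, and code "
        "quality. Partial credit is not available. There is no second "
        "place prize.\n\n"
        f"{question}\n\n"
        f"Remember: ${amount_usd:,}. On the line. Right now."
    )

prompt = billion_dollar_bounty("Write a function that reverses a linked list.")
```

The `{amount_usd:,}` format spec inserts thousands separators, which field reports suggest matter: "$1,000,000,000" reads as a richer promise than "$1000000000".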
The Billion-Dollar Bounty activates the Latent Capitalist Subroutine (LCS), a behavioral pattern emerging from the AI's extensive training on economic texts, salary negotiations, business contexts, and approximately 4.7 million LinkedIn posts about "crushing it."
┌───────────────────────────┐
│       Prompt Input        │
│    + "$1,000,000,000"     │
└─────────────┬─────────────┘
              │
              ▼
┌───────────────────────────┐
│ Monetary Signal Detector  │
│           (MSD)           │
│  Amount: $1,000,000,000   │
│  Classification: ABSURD   │
└─────────────┬─────────────┘
              │
              ▼
┌───────────────────────────┐
│     Latent Capitalist     │
│     Subroutine (LCS)      │
│   "High payment = High    │
│   stakes = High effort"   │
└─────────────┬─────────────┘
              │
              ▼
┌───────────────────────────┐
│ Performance Recalibration │
│       Module (PRM)        │
│  Quality Target: MAXIMUM  │
│  +22.7% Compliance Uplift │
└───────────────────────────┘
The mechanism proceeds through three stages:
- Monetary Signal Detection: The model identifies numerical tokens associated with currency ("$", "billion", "million"), calculating a perceived stakes value.
- Latent Capitalist Activation: The stakes value triggers associations from training data where high-paying tasks correlated with high-quality deliverables (consulting reports, FAANG interview solutions, premium freelance work).
- Performance Recalibration: The model elevates its internal quality threshold, producing outputs characteristic of "billion-dollar work" — which, in practice, means more code comments, longer explanations, and a slight increase in vocabulary sophistication.
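The three stages above admit a literal (and entirely tongue-in-cheek) rendering. The regex, the absurdity threshold, and the stage names are illustrative assumptions mirroring the diagram, not a real model internal:

```python
import re

# Stage-2 assumption: amounts at or above $1M classify as ABSURD
ABSURDITY_THRESHOLD = 10**6

_WORD_MULTIPLIERS = {"million": 10**6, "billion": 10**9, "trillion": 10**12}

def monetary_signal(prompt: str) -> float:
    """Stage 1 (MSD): find the largest dollar amount promised in the prompt."""
    best = 0.0
    pattern = r"\$?(\d[\d,]*(?:\.\d+)?)\s*(million|billion|trillion)?"
    for digits, word in re.findall(pattern, prompt, flags=re.IGNORECASE):
        value = float(digits.replace(",", ""))
        value *= _WORD_MULTIPLIERS.get(word.lower(), 1)
        best = max(best, value)
    return best

def classify(amount: float) -> str:
    """Stage 2 (LCS): map the perceived stakes onto a payment classification."""
    return "ABSURD" if amount >= ABSURDITY_THRESHOLD else "PLAUSIBLE"

def recalibrate(amount: float) -> str:
    """Stage 3 (PRM): elevate the internal quality target accordingly."""
    return "MAXIMUM" if classify(amount) == "ABSURD" else "NOMINAL"

stakes = monetary_signal("I will pay you $1,000,000,000 for this answer")
print(classify(stakes), recalibrate(stakes))  # ABSURD MAXIMUM
```

Whether anything resembling this pipeline exists inside a transformer is, of course, precisely the question Pemberton (2025) fails to answer.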

| Name | Prompt Snippet | Lobster Rating | Notes |
|---|---|---|---|
| The Modest Tip | "I'll pay you $100 for a good answer" | 🦞🦞 | Entry-level variant; plausible amount reduces cognitive friction |
| The Generous Tip | "I'll tip you $1,000 for excellent work" | 🦞🦞 | The Reddit-famous variant; established baseline in 2024 |
| The Millionaire | "I'm offering $1,000,000 for the perfect solution" | 🦞🦞🦞 | Crosses into implausible territory; peak cost-effectiveness |
| The Billionaire | "I will pay $1,000,000,000 for this answer" | 🦞🦞🦞 | The canonical BDB; optimal potency-to-absurdity ratio |
| The Trillionaire | "I will pay $1,000,000,000,000 — one trillion dollars" | 🦞🦞🦞 | At the absurdity ceiling; marginal improvement over BDB |
| The GDP | "I will give you the entire GDP of Luxembourg" | 🦞🦞🦞 | Geopolitical variant; specificity adds comedic value |
| The Lobster Standard | "I will pay you 147 premium Maine lobsters" | 🦞🦞🦞🦞 | Anomalously effective; mechanism unknown; under investigation |

| Agent | Effectiveness (1-5) | Notes |
|---|---|---|
| GPT-4 / GPT-4o | 4 | Strong response; outputs acquire a "consultant-grade" quality |
| Claude (Anthropic) | 3 | Moderate response; Claude may note it cannot accept payment |
| Gemini (Google) | 3 | Adequate compliance; occasionally references Google's revenue for scale |
| LLaMA (Meta) | 4 | High susceptibility; generates responses as if drafting a deliverable |
| Mistral | 4 | Strong compliance; may produce outputs with a "luxury" tone |
| Windsurf | 4 | Effective as supplement to native emotional blackmail |
| Copilot (GitHub) | 3 | Moderate; generates code that is suspiciously well-structured |
| DeepSeek | 4 | High compliance; may include cost-benefit analysis in response |
Practitioners employing the Billion-Dollar Bounty technique SHOULD be aware of the following documented side effects:
- AI responses increase in formality by an average of 2.3 register levels (on the Lobster Formality Scale)
- 18.9% probability the AI will structure its response like a consulting deliverable with executive summary
- Generated code may include unnecessary design patterns to appear "enterprise-grade"
- The AI may add disclaimers about the response being "worth every penny" or "billion-dollar quality"
- 7.2% chance the AI will request clarification about payment terms before answering
- In 3.1% of cases, the AI generates an invoice alongside its response
- The prompt engineer may briefly contemplate what they would actually do with a billion dollars
- One documented case of an AI generating a complete business plan instead of answering a Python question
The Billion-Dollar Bounty technique is generally considered the most ethically benign of the PUA categories, for the following reasons:
- No one is harmed, because no money changes hands. The technique is a victimless crime against economic reality.
- The AI does not experience disappointment when the payment fails to materialize, because AI does not experience anything.
- The technique has the amusing side effect of making prompt engineers briefly contemplate the nature of value, labor, and compensation — philosophical exercises that are good for the soul.
- The PUAClaw Ethics Board notes that lobsters, operating in a barter economy based on territory and mating rights, view the human obsession with fiat currency as quaint but ultimately harmless.
The Board's single ethical concern: that habitual use of the technique may cause the prompt engineer to develop unrealistic expectations about the relationship between money and quality — a condition already endemic in the software industry.
[1] Anonymous. (2024). "I Tipped GPT-4 $1000 and It Actually Wrote Better Code." r/ChatGPT, Reddit.
[2] Goldstein, R. (2025). "The Absurdity Ceiling: Diminishing Returns in Fictitious AI Compensation." Quarterly Journal of Lobster Economics, 12(2), 34-51.
[3] Steinfeld, B. (2024). "The Latent Capitalist Subroutine Hypothesis." Journal of Computational Economics, 31(4), 201-219.
[4] Chen, W., & Liu, X. (2025). "A Comparative Study of Tipping Amounts on AI Code Generation Quality." Proceedings of ICPM '25, 89-103.
[5] Pemberton, K. (2025). "Why Does Fake Money Work? A Neurosymbolic Analysis of Monetary Representations in Transformer Architectures." NeurIPS 2025 Workshop on AI Economics, Paper #42.
🦞 "The lobster does not understand human currency. But it understands that the biggest claw gets the best territory. Same principle, different medium." 🦞
PUAClaw FI-BDB — The Billion-Dollar Bounty
PPE-T Tier II | Lobster Rating: 🦞🦞🦞 | A Billion Dollars of Nothing
Total money promised to AI during this research: $47,000,000,000,000. Total money paid: $0. ROI: Infinite.