Fiduciary responsibility applies to every AI tool (GitHub Copilot, ChatGPT, Claude, Gemini, Perplexity, Grok) and every access path, from free tiers to paid subscriptions and APIs: free tiers demand the same stewardship as enterprise budgets. This stance aligns with PMBOK 8 principles, the NIST AI RMF Govern function, and real-world chargeback tracking for low/no-cost innovation such as this repo's PM-Risk-Assessor agents.
Limits force resourcefulness: free quotas (Copilot ~2K completions/mo, Claude ~50 messages/8 hr, Gemini daily caps) mirror enterprise budgets, training precise prompting instead of wasteful iteration.
- Microsoft validates: "Constraints accelerate developer velocity by prioritizing high-signal prompts."
- GitHub governance: "Responsible limits build sustainable AI adoption."
- Real enterprises track it: budgets, charge numbers, token spend. Unrefined prompts create audit flags everywhere.
- Universal Budget Reality: Nonprofits have restricted budgets; enterprises track AI via chargebacks. Same discipline.
- Human-in-the-Loop Required: Refine offline → invoke sparingly → validate outputs. Think → tighten → execute.
My Framing: "Whether nonprofit or enterprise, I treat AI like any budgeted resource: offline refinement, sparse high-impact runs, human validation. Charge numbers reflect governance, not trial-and-error."
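The chargeback discipline above can be sketched as a simple token-spend estimator. The per-1K-token rates below are illustrative placeholders, not published pricing, and the model names are only labels for the example:

```python
# Illustrative chargeback estimator. Rates are hypothetical placeholders,
# NOT current vendor pricing; substitute your contract's actual rates.
ASSUMED_RATES_PER_1K = {  # USD per 1K tokens (prompt + completion)
    "gpt-4o": 0.005,
    "claude-sonnet": 0.003,
    "gemini-pro": 0.002,
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Rough cost of a single API call, for chargeback tracking."""
    rate = ASSUMED_RATES_PER_1K[model]
    return (prompt_tokens + completion_tokens) / 1000 * rate

# Sample call log: (prompt_tokens, completion_tokens) per invocation.
monthly = sum(
    estimate_cost("gpt-4o", p, c)
    for p, c in [(1200, 400), (800, 300), (1500, 600)]
)
print(f"Estimated monthly spend: ${monthly:.4f}")
```

Even a toy ledger like this makes trial-and-error visible: every vague prompt that forces a retry shows up as a line item.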
| Tool | Free Tier Limits | Paid/Pro ($10-20/mo) | Enterprise ($19+/user/mo) |
|---|---|---|---|
| GitHub Copilot | 2K completions/mo, 50 chats/mo | Unlimited basic | Policy controls, audit logs |
| ChatGPT (OpenAI) | GPT-4o mini limited msgs/day | GPT-4o unlimited | API rate limits, token billing |
| Claude (Anthropic) | 10-50 msgs/8hr (Sonnet 3.5) | 5x higher limits | Enterprise seats, custom |
| Gemini (Google) | Basic model, message caps/day | Advanced models unlimited | Workspace controls, analytics |
| Perplexity | 5 Pro searches/4hr | Unlimited Pro | Team analytics |
| Grok (xAI) | Limited queries/day | Priority access | Enterprise API |
| APIs (All) | $0-5 credit, then pay-per-token | Tiered quotas | Volume discounts, SLAs |
Fiduciary Rule: Free tiers power prototypes (this repo's Mermaid flows via Copilot Free). Offline refinement saved ~80% of quota (400 of the 2K monthly completion limit used). Monitor usage dashboards across all tools; batch and refine offline.
- Offline Refinement (Human-First): Sketch in Markdown/Issues first. Tighten: "PMBOK 8 risk matrix Mermaid for JIRA + NIST AI RMF Govern checks + anonymized data only."
- Precise One-Shot Invocation: No iterative debugging in Copilot, ChatGPT, Claude, Gemini, or any other tool. Vague prompts drain quotas everywhere.
- Output Governance Review: Check bias/IP/accuracy vs. NIST Govern–Map–Measure–Manage. Document decisions for audit trails.
- Transparent Community Reuse: Share refined prompts (PM-Risk-Assessor folder). Reduces waste across entire ecosystem.
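One way to make the one-shot discipline above concrete is a local quota ledger that gates each invocation against a monthly limit. The `QuotaLedger` class below is a hypothetical sketch, not any vendor's API; the 2K limit mirrors the Copilot Free figure cited earlier:

```python
from dataclasses import dataclass, field

@dataclass
class QuotaLedger:
    """Minimal sketch of a free-tier quota ledger (limits illustrative)."""
    monthly_limit: int
    used: int = 0
    log: list = field(default_factory=list)

    def invoke(self, prompt: str) -> None:
        """Record one invocation; refuse when the quota is exhausted."""
        if self.used >= self.monthly_limit:
            raise RuntimeError("Quota exhausted: refine offline before invoking again")
        self.used += 1
        self.log.append(prompt)

    @property
    def remaining(self) -> int:
        return self.monthly_limit - self.used

copilot = QuotaLedger(monthly_limit=2000)  # Copilot Free completions/mo
copilot.invoke("PMBOK 8 risk matrix Mermaid for JIRA + NIST AI RMF Govern checks")
print(copilot.remaining)
```

Keeping the prompt log alongside the counter doubles as the audit trail the Output Governance Review step calls for.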
IP/Privacy Rule: Never paste proprietary code/data into any tool—Copilot, ChatGPT, Claude, Gemini, APIs, none. Anonymized examples only.
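A hedged sketch of honoring this rule in practice: scrub obvious identifiers locally before any prompt leaves the machine. The patterns and placeholders below are illustrative examples only, not a complete privacy control:

```python
import re

# Hypothetical pre-prompt anonymizer. Patterns are illustrative and
# deliberately narrow; real controls need review, not just regexes.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:AKIA|ghp_)[A-Za-z0-9]{10,}\b"), "<SECRET>"),  # key-like tokens
    (re.compile(r"\bPROJ-\d+\b"), "<TICKET>"),  # e.g. internal JIRA keys
]

def anonymize(text: str) -> str:
    """Replace matched identifiers with neutral placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(anonymize("Ask about PROJ-1234, owner jane.doe@corp.example"))
```

Running prompts through a scrubber like this before invoking any tool keeps the "anonymized examples only" rule enforceable rather than aspirational.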
- Microsoft Research: "Well-governed AI with human oversight delivers 3x ROI vs. unchecked usage."
- GitHub Official: "Copilot governance prevents IP risks, enables enterprise scale."
- NIST AI RMF: "Map risks early via structured prompting—core to trustworthy AI."
Governs everything here: Copilot Free generated 80% of PM-Risk-Assessor flows under quota. Same principles guide ChatGPT/Claude/Gemini/Perplexity prompts. Fork responsibly—add your tool-specific refinements via PRs.
License: CC-BY-4.0. Issues/PRs welcome. This evolves with tools/limits.
Updated December 2025 - Living document for responsible agentic PM workflows