
Optimize AI prompts for provider caching #1039

Open
ryokun6 wants to merge 1 commit into main from cursor/optimize-ai-prompt-caching-b4fb

Conversation

ryokun6 (Owner) commented Apr 8, 2026

Summary

  • Add shared prompt-cache helpers for Anthropic cache-control metadata, long-content cache marks, per-step cache marks, and provider-option merging (a hypothetical sketch follows this list).
  • Mark reusable system prompts in chat, Telegram, memory extraction/processing, applet AI, IE generation, room replies, parse-title, and song title AI parsing.
  • Add per-step Anthropic cache control for the Ryo chat and Telegram tool loops while preserving OpenAI reasoning provider options (see the second sketch below).
  • Expand provider-options unit coverage for prompt caching helpers.
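
For illustration, here is a minimal TypeScript sketch of the helper shape described in the first bullet. All names in it (`ProviderOptions`, `ANTHROPIC_CACHE_CONTROL`, `mergeProviderOptions`, `withAnthropicCache`) are hypothetical stand-ins, not the PR's actual identifiers, and it assumes the Vercel AI SDK convention of per-provider `providerOptions` maps carrying Anthropic's ephemeral `cacheControl` metadata:

```ts
// Hypothetical sketch; names and shapes are illustrative, not the PR's code.
type ProviderOptions = Record<string, Record<string, unknown>>;

type SystemMessage = {
  role: "system";
  content: string;
  providerOptions?: ProviderOptions;
};

// Anthropic prompt-caching metadata: an "ephemeral" cache-control mark tells
// the provider that this prompt prefix may be reused across requests.
const ANTHROPIC_CACHE_CONTROL: ProviderOptions = {
  anthropic: { cacheControl: { type: "ephemeral" } },
};

// Merge per-provider option maps without clobbering other providers' keys
// (e.g. OpenAI reasoning options already set on the same message).
function mergeProviderOptions(
  base: ProviderOptions | undefined,
  extra: ProviderOptions,
): ProviderOptions {
  const merged: ProviderOptions = { ...(base ?? {}) };
  for (const [provider, options] of Object.entries(extra)) {
    merged[provider] = { ...(merged[provider] ?? {}), ...options };
  }
  return merged;
}

// Mark a reusable system prompt for provider-side caching.
function withAnthropicCache(message: SystemMessage): SystemMessage {
  return {
    ...message,
    providerOptions: mergeProviderOptions(
      message.providerOptions,
      ANTHROPIC_CACHE_CONTROL,
    ),
  };
}
```

A call site such as the chat or Telegram system prompt would then pass `withAnthropicCache({ role: "system", content: systemPrompt })` instead of the bare message; merging per provider is what keeps unrelated providers' options intact.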
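A second sketch covers the per-step cache marks in the tool loops, reusing `mergeProviderOptions` and `ANTHROPIC_CACHE_CONTROL` from the sketch above; again, the names and message shape are assumptions rather than the PR's code:

```ts
// Hypothetical continuation of the sketch above. In a multi-step tool loop,
// mark the newest message at each step so the growing conversation prefix
// can stay in Anthropic's prompt cache. Because options are merged per
// provider, any OpenAI reasoning options already on the message survive.
type LoopMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: unknown;
  providerOptions?: ProviderOptions;
};

function markLastMessageForCache(messages: LoopMessage[]): LoopMessage[] {
  if (messages.length === 0) return messages;
  const last = messages[messages.length - 1];
  const marked: LoopMessage = {
    ...last,
    providerOptions: mergeProviderOptions(
      last.providerOptions,
      ANTHROPIC_CACHE_CONTROL,
    ),
  };
  return [...messages.slice(0, -1), marked];
}
```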

Testing

  • bun test tests/test-ai-model-provider-options.test.ts tests/test-telegram-status.test.ts
  • bun run test:unit
  • bun run build

Co-authored-by: Ryo Lu <me@ryo.lu>

ryos-deploy Bot commented Apr 8, 2026

The preview deployment for ryos-dev failed. 🔴

Open Build Logs | Open Application Logs

Last updated at: 2026-04-08 12:25:13 CET

ryokun6 marked this pull request as ready for review April 8, 2026 14:18
