Description
The mcp__otter__otter tool's transcript action returns the full transcript in the tool response, which means the entire transcript text must flow through the calling agent's context window. For large transcripts this is wasteful -- especially when the transcript is being passed onward to another tool (e.g., an external LLM via mcp__llm__chat).
Proposed change
Add an optional output_file parameter to the transcript action (and possibly formatted_text output). When provided, the transcript text is written directly to that file path, and the tool response returns only metadata (title, speakers, segment count, file path) rather than the full text.
This mirrors the pattern used by mcp__llm__chat with its output_file parameter.
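A minimal sketch of what the handler change could look like. The function name `handle_transcript` and the transcript dict shape (`title` plus a list of `segments` with `speaker`/`text`) are assumptions for illustration, not the actual mcp__otter__otter internals:

```python
from pathlib import Path


def handle_transcript(transcript, output_file=None):
    """Hypothetical transcript-action handler sketch.

    Without output_file, behave as today and return the full text.
    With output_file, write the text to disk and return only metadata.
    """
    text = "\n".join(
        f"{seg['speaker']}: {seg['text']}" for seg in transcript["segments"]
    )
    if output_file is None:
        # Current behavior: full transcript flows through the tool response.
        return {"title": transcript["title"], "text": text}

    # New behavior: text goes to disk, response carries metadata only.
    Path(output_file).write_text(text)
    return {
        "title": transcript["title"],
        "speakers": sorted({seg["speaker"] for seg in transcript["segments"]}),
        "segment_count": len(transcript["segments"]),
        "file_path": output_file,
    }
```

The metadata-only response stays small regardless of transcript length, which is the whole point of the parameter.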
Use case
Supervision transcripts are often passed to Gemini/OpenAI to generate summaries or LaTeX write-ups. Currently the workflow requires either:
- Reading the full transcript into the main agent context (expensive)
- Manually writing the transcript to a temp file (clunky)
With output_file, the transcript could be piped directly to an LLM via files parameter without ever entering the main context.
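The intended workflow can be sketched as follows. `call_tool` is a stand-in for however the agent actually invokes MCP tools, and the prompt and file path are illustrative:

```python
def summarize_supervision(call_tool, meeting_id):
    """Hypothetical agent workflow: pipe a transcript to an LLM
    without the text ever entering the agent's context."""
    # Step 1: transcript is written straight to disk; only metadata returns.
    meta = call_tool(
        "mcp__otter__otter",
        action="transcript",
        id=meeting_id,
        output_file="/tmp/transcript.txt",
    )
    # Step 2: the file path, not the text, is handed to the external LLM.
    return call_tool(
        "mcp__llm__chat",
        prompt=f"Summarize the supervision '{meta['title']}'.",
        files=[meta["file_path"]],
    )
```

Only the small metadata dict and the final summary cross the main agent's context window.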