[Frontend] Add speaker_embedding passthrough to /v1/audio/speech API #1227
Open
marksverdhei wants to merge 4 commits into vllm-project:main from
Conversation
💡 Codex Review: automated review suggestions for this pull request. Reviewed commit: 8e686677cd
Allow users to pass a pre-computed 1024-dim speaker embedding vector directly to the speech endpoint, bypassing ECAPA-TDNN extraction from reference audio. This enables embedding interpolation (SLERP/LERP) between voices, embedding caching, and programmatic voice manipulation.

- Add speaker_embedding field to OpenAICreateSpeechRequest
- Validate mutual exclusivity with ref_audio and enforce the Base task requirement
- Auto-set x_vector_only_mode=True when speaker_embedding is provided
- Handle the embedding in generate_voice_clone() to construct the prompt directly
- Add --speaker-embedding flag to the example client
- Add speaker_embedding_interpolation.py example with a SLERP demo

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: marksverdhei <marksverdhei@hotmail.com>
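Per the commit message above, the example client's --speaker-embedding flag accepts a JSON file path holding the 1024-dim vector. A minimal sketch of how such a client might load the file and assemble the request body, assuming the field names from this PR (speaker_embedding, task_type) — the helper names and the model placeholder are inventions of this sketch, not the PR's code:

```python
import json


def load_speaker_embedding(path):
    """Load a speaker embedding from a JSON file (a flat array of 1024 floats)."""
    with open(path) as f:
        embedding = json.load(f)
    if not isinstance(embedding, list) or len(embedding) != 1024:
        raise ValueError("expected a JSON array of 1024 floats")
    return embedding


def build_speech_payload(text, embedding, model="your-model"):
    # Field names follow the PR description; the model name is a placeholder.
    return {
        "model": model,
        "input": text,
        "task_type": "Base",  # PR requires the Base task with speaker_embedding
        "speaker_embedding": embedding,
    }
```

The payload would then be POSTed to /v1/audio/speech in the usual OpenAI-compatible way.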
The inline audio extraction logic only checked two locations for multimodal output. Refactor it into _extract_audio_output(), which also checks output.request_output.outputs[i].multimodal_output (CompletionOutput level, set via setattr by the output processor) and normalises the "model_outputs" key to "audio" for consistent access.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: marksverdhei <marksverdhei@hotmail.com>
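The lookup described above can be sketched roughly as follows. Only the attribute path output.request_output.outputs[i].multimodal_output and the "model_outputs"-to-"audio" normalisation are taken from the commit message; everything else (the fallback order, the return value) is an assumption of this sketch, not the PR's implementation:

```python
from types import SimpleNamespace


def extract_audio_output(output):
    """Search the known locations for a multimodal output dict and
    return its audio payload, treating the legacy "model_outputs"
    key as equivalent to "audio"."""
    candidates = []
    mm = getattr(output, "multimodal_output", None)
    if mm:
        candidates.append(mm)
    request_output = getattr(output, "request_output", None)
    if request_output is not None:
        for completion in getattr(request_output, "outputs", []):
            # CompletionOutput level, set via setattr by the output processor
            mm = getattr(completion, "multimodal_output", None)
            if mm:
                candidates.append(mm)
    for mm in candidates:
        if "audio" in mm:
            return mm["audio"]
        if "model_outputs" in mm:
            return mm["model_outputs"]  # normalise legacy key to "audio"
    return None
```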
- Remove the hardcoded float32 dtype from speaker embedding tensor creation, letting the downstream .to(self.talker.dtype) handle conversion (P1)
- Add length validation for speaker_embedding (64-8192 range) to catch malformed vectors before they reach model execution (P2)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: marksverdhei <marksverdhei@hotmail.com>
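The request-level checks described in these commits can be sketched as a single validator. The mutual exclusivity with ref_audio and the 64-8192 length bounds come from the PR; the function name, signature, and error messages are assumptions of this sketch:

```python
def validate_speaker_embedding(speaker_embedding, ref_audio=None,
                               min_len=64, max_len=8192):
    """Reject malformed speaker_embedding requests before model execution."""
    if speaker_embedding is None:
        return
    if ref_audio is not None:
        # The two voice sources are mutually exclusive.
        raise ValueError("speaker_embedding and ref_audio are mutually exclusive")
    n = len(speaker_embedding)
    if not (min_len <= n <= max_len):
        raise ValueError(
            f"speaker_embedding length {n} is outside the allowed "
            f"range [{min_len}, {max_len}]")
```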
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: marksverdhei <marksverdhei@hotmail.com>
Contributor:
Heads up — PR #1201 adds a voice upload API ( After rebasing you'd need to:
Summary

- Add a speaker_embedding field to OpenAICreateSpeechRequest — accepts a pre-computed 1024-dim float vector that bypasses the ECAPA-TDNN speaker encoder extraction step
- Validate mutual exclusivity with ref_audio, enforce the Base task requirement, and auto-set x_vector_only_mode=True
- Handle the embedding in generate_voice_clone() by constructing a VoiceClonePromptItem directly from the tensor
- Add a --speaker-embedding CLI flag to the example client (accepts a JSON file path)
- Add a speaker_embedding_interpolation.py example script with offline ECAPA-TDNN extraction, SLERP interpolation, and API integration

This enables embedding interpolation (SLERP/LERP) between voices, embedding caching, and programmatic voice manipulation without requiring reference audio at inference time.
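The SLERP interpolation mentioned above can be sketched in a few lines of NumPy. This is a minimal sketch of the standard spherical-interpolation formula, not the PR's speaker_embedding_interpolation.py; the function name and the near-parallel LERP fallback are choices of this example:

```python
import numpy as np


def slerp(a, b, t):
    """Spherical linear interpolation between two embedding vectors.
    t=0 returns a, t=1 returns b; intermediate t follows the great
    circle between the two directions."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Nearly parallel vectors: fall back to plain LERP.
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```

Two embeddings blended this way could then be sent as the speaker_embedding field to generate a voice "between" the two references.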
Test plan

- speaker_embedding + ref_audio together returns a validation error
- speaker_embedding without task_type=Base returns a validation error
- speaker_embedding alone (no ref_audio) generates audio successfully
- Output compared with ref_audio outputs for consistency

🤖 Generated with Claude Code