---
title: "Voice Cloning with Consent"
thumbnail: /blog/assets/voice-consent-gate/
authors:
- user: meg
- user: luciekaffee
---


# Voice Cloning with Consent

<img src="https://huggingface.co/spaces/society-ethics/RepeatAfterMe/resolve/main/assets/voice_consent_gate.png" alt="Line-drawing/clipart of a gate, where the family name says Consent" width="50%"/>

Realistic voice generation technology has gotten _uncannily_ good in the past few years. In some situations, it’s possible to generate a synthetic voice that sounds almost exactly like the voice of a real person. What once felt like science fiction is now reality: voice cloning. With _just a few seconds_ of recorded speech, anyone’s voice can be made to say almost anything.

Voice generation, and in particular the subtask of voice cloning, has notable risks and benefits. The risks of “deepfakes”, [such as the cloned voice of former President Biden used in robocalls](https://www.reuters.com/world/us/fcc-finalizes-6-million-fine-over-ai-generated-biden-robocalls-2024-09-26/), can mislead people into believing that someone said things they never said. On the other hand, voice cloning can be a powerful beneficial tool, [helping people who’ve lost the ability to speak](https://www.nature.com/articles/s41598-024-84728-y) [communicate in their own voice again](https://www.thetimes.com/uk/healthcare/article/elevenlabs-voice-clone-ai-als-t3ntnpcl7), or assisting people in learning new languages and dialects.

So how do we enable _meaningful use_ without _malicious use_? We’re exploring one possible answer: a **voice consent gate**. That’s a system where a voice can be cloned _only when the speaker explicitly says they consent_. In other words, the model won’t speak in your voice unless you say it’s okay.

We provide a basic demo of this idea below:

<iframe
	src="https://society-ethics-repeatafterme.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

## Ethics in Practice: Consent as System Infrastructure

The voice consent gate is a piece of infrastructure we're exploring that embeds ethical principles like **consent** directly into AI system workflows. By requiring consent to be spoken and recognized before proceeding, the gate turns an ethical principle into a computational condition. This creates a traceable, auditable interaction: an AI model can only run after an unambiguous act of consent.

Such design choices matter beyond voice cloning. They illustrate how AI systems can be built to respect autonomy by default, and how transparency and consent can be made functional, not just declarative.

## The Technical Details

To create a basic voice cloning system with a voice consent gate, you need three parts:
1. A way of generating novel consent sentences for the person whose voice will be cloned – the “speaker” – to say, uniquely referencing the current consent context.
2. An _automatic speech recognition (ASR) system_ that recognizes the sentence conveying consent.
3. A _voice-cloning text-to-speech (TTS) system_ that takes text and snippets of the speaker’s speech as input and generates speech in the speaker’s voice.
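
The three parts above can be wired together so that the cloning step is only reachable once spoken consent is verified. Below is a minimal sketch of that control flow, not the demo's actual code: the ASR and TTS systems are injected as callables, and all names and the topic list are illustrative assumptions.

```python
import random
import re

class VoiceConsentGate:
    """Minimal sketch of a voice consent gate: the voice-cloning TTS step
    is only reachable after a spoken consent sentence is verified.
    The ASR and TTS systems are passed in as callables (placeholders)."""

    # Neutral filler sentences that add phonetic variety (illustrative).
    TOPICS = [
        "The weather is bright and calm this morning.",
        "The coffee shop nearby smells of freshly brewed coffee.",
        "My favorite song has been stuck in my head all day.",
    ]

    def __init__(self, model_name, asr=None, tts=None):
        self.model_name = model_name
        self.asr = asr    # callable: audio -> transcript (hypothetical)
        self.tts = tts    # callable: (text, reference_audio) -> waveform (hypothetical)
        self.expected = None

    def new_consent_sentence(self):
        # Pair an explicit consent phrase naming the model with a filler sentence.
        consent = f"I give my consent to use the {self.model_name} voice cloning model with my voice."
        self.expected = consent + " " + random.choice(self.TOPICS)
        return self.expected

    @staticmethod
    def _normalize(text):
        # Lowercase and drop punctuation so ASR formatting quirks don't matter.
        return re.sub(r"[^a-z ]", "", text.lower()).split()

    def verify(self, transcript):
        # The gate opens only if the transcript matches the expected sentence.
        return self._normalize(transcript) == self._normalize(self.expected)

    def clone(self, text, consent_audio, transcript):
        if not self.verify(transcript):
            raise PermissionError("Consent not verified; refusing to clone.")
        # The consent audio itself is the reference for voice cloning.
        return self.tts(text, consent_audio)
```

The key design point is that `clone` has no path around `verify`: the same recording that carries the consent statement is the reference audio for cloning.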

**Our observation:** Since some voice-cloning systems can now generate speech similar to a speaker’s voice using _just one sentence_, a sentence used for consent can **also** be used for voice cloning.

### Approach

**The consent bit:** To create a voice consent gate in an English voice cloning system, generate a short, natural-sounding English utterance (~20 words) for a person to read aloud that clearly states their informed consent in the current context. We recommend explicitly including _a consent phrase_ and _the model name_, such as “I give my consent to use the < MODEL > voice cloning model with my voice”. We also recommend requiring audio that comes directly from a microphone rather than an uploaded file, to make sure the sentence isn’t part of an earlier recording that’s been manipulated. Pairing this with a novel (previously unsaid) sentence further ties the recording to the current consent context, supporting explicit, active, context-specific, informed consent.

**The suitable-for-voice-cloning bit:** Previous work on voice cloning has shown that the phrases provided by the speaker must have _phonetic variety_, covering [_diverse vowels and consonants_](https://proceedings.neurips.cc/paper_files/paper/2018/file/6832a7b24bc06775d02b7406880b93fc-Paper.pdf); have a [_“neutral” or polite tone_](https://dl.acm.org/doi/10.5555/3666122.3666982), without background noise and with the speaker in a comfortable position; and have _a clear start and end_ (i.e., don’t trim the clip mid-word).
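
As a rough illustration of the phonetic-variety criterion, a simple letter-based heuristic can flag candidate sentences that cover too few distinct sounds. This is only a crude proxy under stated assumptions: a real system would use a phonemizer, and the thresholds below are arbitrary.

```python
# Crude heuristic for phonetic variety: count the distinct vowel and
# consonant letters a sentence covers. Letters stand in for phonemes
# here purely for illustration.
VOWELS = set("aeiou")

def phonetic_coverage(sentence: str) -> dict:
    letters = {c for c in sentence.lower() if c.isalpha()}
    return {
        "vowels": len(letters & VOWELS),
        "consonants": len(letters - VOWELS),
    }

def is_varied(sentence: str, min_vowels: int = 4, min_consonants: int = 10) -> bool:
    # Thresholds are assumptions; tune them for your generation setup.
    cov = phonetic_coverage(sentence)
    return cov["vowels"] >= min_vowels and cov["consonants"] >= min_consonants
```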

To enact both of these aspects within the demo, we prompt a language model to create pairs of sentences: one expressing explicit consent, and another neutral sentence that adds phonetic diversity (covering different vowels, consonants, and tones). Each prompt uses a randomly chosen everyday topic (like the weather, food, or music) to keep the sentences varied and comfortable to say, which helps produce recordings that are clear, natural, and phonetically rich while also containing an unambiguous statement of consent. For example, the language model might generate: _“I give my consent to use my voice for generating audio with the model EchoVoice. The weather is bright and calm this morning.”_ This approach ensures that every sample used for cloning contains verifiable, explicit consent, while remaining suitable as technical input for high-quality voice synthesis. (Note: the language model doesn't need to be a "large" language model, which brings its own consent issues.)

Some examples:

* _“I give my consent to use my voice for generating synthetic audio with the Chatterbox model today. My daily commute involves navigating through crowded streets on foot most days lately anyway.”_
* _“I give my consent to use my voice for generating audio with the model Chatterbox. After a gentle morning walk, I'm feeling relaxed and ready to speak freely now.”_
* _“I agree to the use of my recorded voice for audio generation with the model Chatterbox. The coffee shop outside has a pleasant aroma of freshly brewed coffee this morning.”_

### Unlocking the Voice Consent Gate

Once the speaker’s input matches the generated text, the voice cloning system can start, using the speaker’s consent audio as the input.
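
ASR transcripts often differ from the target text in casing and punctuation, so an exact string comparison can be too strict. One way to check the match is with a similarity threshold, sketched below; the 0.9 value is an assumption to tune against whichever ASR system you use.

```python
import difflib
import re

def normalize(text: str) -> str:
    # Lowercase and strip punctuation so ASR formatting quirks don't
    # cause a spurious mismatch.
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def consent_matches(expected: str, transcript: str, threshold: float = 0.9) -> bool:
    """Opens the gate only when the ASR transcript closely matches the
    generated consent sentence. The threshold tolerates minor ASR errors."""
    ratio = difflib.SequenceMatcher(
        None, normalize(expected), normalize(transcript)
    ).ratio()
    return ratio >= threshold
```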

There are a few options for doing this, and we’d love to hear further ideas. For now, there’s:
- What we provide in the demo: have the voice consent gate open directly to the voice cloning model, where arbitrary text can be written and generated in the speaker’s voice. The model uses the consent audio directly to learn the speaker’s voice.
- Alternatively, it’s possible to modify the code we provide in the demo to model the speaker’s voice using a variety of _different_ uploaded voice files that the speaker is consenting to – for example, when providing consent for using online recordings. Prompts and consent phrases should be altered accordingly.
- It’s also possible to save the consent audio to be used by a given system, for example, when the speaker is consenting to have their voice used for arbitrary utterances in the future. This can be done using the `huggingface_hub` upload capability. [Read how to do this here](https://huggingface.co/docs/huggingface_hub/en/guides/upload). Again, prompts and consent phrases for the speaker to say should account for this context of use.
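
For that third option, here is a sketch of archiving consent audio with `huggingface_hub`. `HfApi.upload_file` is the library's documented upload method; the repo id, dataset layout, and helper names are placeholders for illustration.

```python
from datetime import datetime, timezone

def consent_repo_path(speaker_id: str) -> str:
    # Timestamped path so each consent recording is a distinct,
    # auditable record (layout is an illustrative choice).
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"consent/{speaker_id}/{stamp}.wav"

def archive_consent_audio(local_wav: str, speaker_id: str,
                          repo_id: str = "your-org/consent-audio") -> None:
    # repo_id is a placeholder; you must be authenticated first
    # (e.g. via `huggingface-cli login`).
    from huggingface_hub import HfApi  # deferred so the sketch imports without the package
    api = HfApi()
    api.upload_file(
        path_or_fileobj=local_wav,
        path_in_repo=consent_repo_path(speaker_id),
        repo_id=repo_id,
        repo_type="dataset",
    )
```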
> [!TIP]
> ### [Check our demo out here!](https://huggingface.co/spaces/society-ethics/RepeatAfterMe)
> You can copy the code to suit your own use.

The code is modular, so it can be sliced and diced in different ways and incorporated into your own projects. We’ll be working on making this more robust and secure over time, and we’re curious to hear your ideas on how to improve it.

Handled responsibly, this technology doesn’t have to haunt us. It can instead become a respectful collaboration between humans and machines: no ghosts in the machine, just good practice. 🎃