
Move the LLM BitNet work into the correct folder. #1212


Open: wants to merge 21 commits into base: main

Changes from all commits (21 commits)
bcd8b5a
Some local changes of mine
Mar 22, 2025
29e1d85
Merge branch 'main' of https://github.com/FunAudioLLM/CosyVoice into …
Mar 22, 2025
3e2bc9c
Added comments to several functions; implemented dual-stream inference mode
Apr 7, 2025
12f9026
Generate a TensorRT plan-format engine, bringing RTF below 1. Better real-time performance; tuned the generation frame rate to a suitable value
Apr 8, 2025
7b87998
Integrated a progress bar and a streaming scaling factor
Apr 8, 2025
9e133dd
Ported the earlier cosy-ex changes into this repository
Apr 10, 2025
629d9fa
Ran several experiments; the JIT feature actually slows things down. Works locally
Apr 14, 2025
425e3d5
Modified cosvoice_2_demo so streaming audio can be played locally. Built from primitives by hand this time; will switch to an existing, well-packaged tool later
Apr 15, 2025
e72cf6a
Built a streaming player; fixed the silence-padding problem
Apr 15, 2025
2b96fae
cosvoice_2_demo and stream_player now concatenate buffers instead of enqueueing individual elements as before
Apr 17, 2025
346ce0e
Finished merging in the updated cosyvoice code
Apr 17, 2025
c84197a
Merged the latest cosyvoice commits
Apr 17, 2025
946e998
Fixed a bug: once the cache is used, token_offset is no longer needed. Also, with the new code the model must be re-downloaded from ModelScope
Apr 22, 2025
3083ab0
Update: cache issue across repeated inference calls
Apr 22, 2025
4757dd7
Update .gitignore: comment out the *.wav ignore rule and add new audio and text files. Modify cosyvoice_2_demo.py…
Apr 24, 2025
7f43f2b
Put .wav back on the ignore list
Apr 24, 2025
3317e1a
Update cosyvoice_2_demo.py: comment out several audio-loading and inference code paths, and tune the StreamPlayer chunk size for performance; also…
Apr 25, 2025
d04f238
Update the CUDA device settings in cosyvoice_2_demo.py; modify stream_player.py to support processing audio data from a byte stream…
May 8, 2025
5255e8d
Improve the audio-request handling in fastapi/client.py: add a request timeout parameter, strengthen error handling, and validate status while receiving audio data…
May 8, 2025
a20b5f7
Update fastapi/client.py: comment out the StreamPlayer-related code, adjust the audio receive chunk size, and reduce logging frequency. Add re…
May 8, 2025
c932ac0
Add CUDA device settings in fastapi/server.py and a new process_audio_chunk function to handle audio data; opt…
May 8, 2025
2 changes: 1 addition & 1 deletion .gitignore
100644 → 100755
@@ -49,4 +49,4 @@ compile_commands.json
pretrained_models/*
*_pb2_grpc.py
*_pb2.py
*.tar
*.tar
244 changes: 12 additions & 232 deletions README.md
@@ -1,241 +1,21 @@
Removed (the original English README):

[![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners)

## 👉🏻 CosyVoice 👈🏻
**CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B)

**CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M)

## Highlight🔥

**CosyVoice 2.0** has been released! Compared to version 1.0, the new version offers more accurate, more stable, faster, and better speech generation capabilities.
### Multilingual
- **Supported Languages**: Chinese, English, Japanese, Korean, Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.)
- **Crosslingual & Mixlingual**: Support zero-shot voice cloning for cross-lingual and code-switching scenarios.
### Ultra-Low Latency
- **Bidirectional Streaming Support**: CosyVoice 2.0 integrates offline and streaming modeling technologies.
- **Rapid First Packet Synthesis**: Achieves latency as low as 150ms while maintaining high-quality audio output.
### High Accuracy
- **Improved Pronunciation**: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0.
- **Benchmark Achievements**: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set.
### Strong Stability
- **Consistency in Timbre**: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis.
- **Cross-language Synthesis**: Marked improvements compared to version 1.0.
### Natural Experience
- **Enhanced Prosody and Sound Quality**: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53.
- **Emotional and Dialectal Flexibility**: Now supports more granular emotional controls and accent adjustments.

## Roadmap

- [x] 2024/12

- [x] 25hz cosyvoice 2.0 released

- [x] 2024/09

- [x] 25hz cosyvoice base model
- [x] 25hz cosyvoice voice conversion model

- [x] 2024/08

- [x] Repetition Aware Sampling (RAS) inference for LLM stability
- [x] Streaming inference mode support, including kv cache and sdpa for rtf optimization

- [x] 2024/07

- [x] Flow matching training support
- [x] WeTextProcessing support when ttsfrd is not available
- [x] Fastapi server and client


## Install

**Clone and install**

- Clone the repo
``` sh
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If you fail to clone the submodule due to network failures, run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
```

- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create Conda env:

``` sh
conda create -n cosyvoice -y python=3.10
conda activate cosyvoice
# pynini is required by WeTextProcessing; install it via conda since that works on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com

# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
```

**Model download**

We strongly recommend that you download our pretrained `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.

``` python
# SDK模型下载
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```

``` sh
# git模型下载,请确保已安装git lfs
mkdir -p pretrained_models
git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
```

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.

Note that this step is not necessary; if you do not install the `ttsfrd` package, WeTextProcessing is used by default.

``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```

**Basic Usage**

We strongly recommend using `CosyVoice2-0.5B` for better performance.
Follow the code below for detailed usage of each model.

``` python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio
```

**CosyVoice2 Usage**
```python
cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False, use_flow_cache=False)

# NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
# zero_shot usage
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# save zero_shot spk for future usage
assert cosyvoice.add_zero_shot_spk('希望你以后能够做的比我还好呦。', prompt_speech_16k, 'my_zero_shot_spk') is True
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '', '', zero_shot_spk_id='my_zero_shot_spk', stream=False)):
torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
cosyvoice.save_spkinfo()

# fine grained control, for supported control, check cosyvoice/tokenizer/tokenizer.py#L248
for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# instruct usage
for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# bistream usage, you can use generator as input, this is useful when using text llm model as input
# NOTE you should still have some basic sentence split logic because llm can not handle arbitrary sentence length
def text_generator():
yield '收到好友从远方寄来的生日礼物,'
yield '那份意外的惊喜与深深的祝福'
yield '让我心中充满了甜蜜的快乐,'
yield '笑容如花儿般绽放。'
for i, j in enumerate(cosyvoice.inference_zero_shot(text_generator(), '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
```

**CosyVoice Usage**
```python
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
# sft usage
print(cosyvoice.list_available_spks())
# change stream=True for chunk stream inference
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
# cross_lingual usage
prompt_speech_16k = load_wav('./asset/cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
# vc usage
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
source_speech_16k = load_wav('./asset/cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, support <laughter></laughter><strong></strong>[laughter][breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
```

**Start web demo**

You can use our web demo page to get familiar with CosyVoice quickly.

Please see the demo website for details.

``` sh
# change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```

**Advanced Usage**

For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`.

**Build for deployment**

Optionally, if you want service deployment, you can run the following steps.

``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```

## Discussion & Communication

You can directly discuss on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).

You can also scan the QR code to join our official Dingding chat group.

<img src="./asset/dingding.png" width="250px">

## Acknowledgements

1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).

## Disclaimer
The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
Added (the new README, translated from Chinese):

## Usage

python cosyvoice_2_demo.py --fp16 --use_flow_cache

Then, at the command line, type input in the following format and press Enter:

voice_code@text_to_speak

Several speakers' voices are already stored:

| Speaker | Voice code |
|---------|---------|
| Harry Potter (哈利波特) | hp |
| Laoxu (老许) | laoxu |

### Examples

hp@Blimey! Professor Snape's given us a mountain of potions homework. Wish I had my invisibility cloak right now. Ron, Hermione, fancy a trip to Hogsmeade?

laoxu@七牛毕竟,是国内最早做云存储的公司。所以我想,就是和云存储相关的交流,可以在这个会之后自由讨论的时候,知无不言,言无不尽.
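The `voice_code@text` input format above is straightforward to parse. As a hypothetical illustration only (the actual handling lives in `cosyvoice_2_demo.py` and may differ), a minimal sketch:

``` python
# Hypothetical sketch of parsing the console input format "voice_code@text".
KNOWN_SPEAKERS = {"hp", "laoxu"}  # assumed to mirror the stored-voice table above

def parse_command(line: str) -> tuple[str, str]:
    """Split 'voice_code@text' into (voice_code, text)."""
    code, sep, text = line.partition("@")
    if not sep or code not in KNOWN_SPEAKERS:
        raise ValueError(f"expected 'voice_code@text', got {line!r}")
    return code, text

print(parse_command("hp@Fancy a trip to Hogsmeade?"))  # ('hp', 'Fancy a trip to Hogsmeade?')
```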
68 changes: 68 additions & 0 deletions README_quantization.md
@@ -0,0 +1,68 @@
# CosyVoice Model Quantization Guide

This guide describes how to quantize CosyVoice models with several quantization methods.

## Prerequisites

First, install the quantization library. We recommend bitsandbytes for quantization; it has the best compatibility:

```bash
pip install bitsandbytes
```

## Quantizing the Model

### 1. Quantize with BitsAndBytes (recommended)

BitsAndBytes is a simple, easy-to-use quantization method with the best compatibility, well suited to a quick first attempt.

```bash
python quant_cosyvoice_bnb.py --model_dir pretrained_models/CosyVoice2-0.5B --output_dir pretrained_models/CosyVoice2-0.5B-bnb --bits 8
```

Parameters:
- `--model_dir`: directory of the original model
- `--output_dir`: directory where the quantized model is saved
- `--bits`: quantization bit width (4 or 8); try 8-bit first
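For orientation, the core idea behind a bitsandbytes-based script like `quant_cosyvoice_bnb.py` is typically to swap `nn.Linear` layers for 8-bit replacements. The sketch below is an assumption about that approach, not the script's actual contents:

``` python
import torch.nn as nn
import bitsandbytes as bnb

def replace_linear_with_int8(module: nn.Module) -> None:
    """Recursively swap nn.Linear layers for bitsandbytes 8-bit linears (sketch)."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            int8 = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # store weights as int8 after quantization
            )
            int8.weight.data = child.weight.data
            if child.bias is not None:
                int8.bias.data = child.bias.data
            setattr(module, name, int8)
        else:
            replace_linear_with_int8(child)

# The actual int8 conversion happens when the model is moved to the GPU:
#   replace_linear_with_int8(model); model.to("cuda")
```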

### 2. Simplified quantization method

We also provide a simplified quantization script. It likewise uses the bitsandbytes library to quantize the model, but takes a more direct approach:

```bash
python quant_cosyvoice_gptq.py --model_dir pretrained_models/CosyVoice2-0.5B --output_dir pretrained_models/CosyVoice2-0.5B-quantized --bits 8
```

Parameters:
- `--model_dir`: directory of the original model
- `--output_dir`: directory where the quantized model is saved
- `--bits`: quantization bit width (4 or 8)
- `--block_size`: quantization block size (default 32)
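To make `--block_size` concrete, here is a minimal sketch of block-wise absmax int8 quantization in plain PyTorch; it illustrates the general technique and is not necessarily what `quant_cosyvoice_gptq.py` does:

``` python
import torch
import torch.nn.functional as F

def quantize_blockwise_absmax(w: torch.Tensor, block_size: int = 32):
    """Quantize a weight tensor to int8, one absmax scale per block (sketch)."""
    flat = w.flatten().float()
    pad = (-flat.numel()) % block_size              # pad so blocks divide evenly
    blocks = F.pad(flat, (0, pad)).view(-1, block_size)
    scales = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = (blocks / scales).round().clamp(-127, 127).to(torch.int8)
    return q, scales                                # int8 values + per-block scales
```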

## Using the Quantized Model

After quantization, test the quantized model with:

```bash
python cosyvoice_2_demo.py --model_dir pretrained_models/CosyVoice2-0.5B-bnb
```

## Simple Fallback Quantization

If the methods above all run into problems, every script also includes a simple fallback quantization method. It does not depend on any particular quantization library and instead uses a plain weight-quantization technique. This is less precise than a dedicated quantization library but has the best compatibility; the block-wise sketch above is close in spirit, and a matching dequantization step is sketched below.
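A minimal sketch of the matching dequantization step, assuming the block-wise int8 format from the sketch above (illustrative only):

``` python
import torch

def dequantize_blockwise(q: torch.Tensor, scales: torch.Tensor,
                         shape: torch.Size, numel: int) -> torch.Tensor:
    """Invert absmax quantization: int8 * per-block scale, drop padding, reshape."""
    flat = (q.float() * scales).flatten()[:numel]
    return flat.view(shape)
```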

## Notes

1. Quantization slightly degrades model quality, but usually does not noticeably affect synthesis quality
2. 4-bit quantization shrinks the model significantly, but may cost more quality
3. If you run into problems, try 8-bit quantization first, then 4-bit
4. Quantization can take a long time; please be patient

## Troubleshooting

If you run into problems during quantization:

1. Try the BitsAndBytes method first; it has the best compatibility
2. On out-of-memory errors, try a machine with more memory
3. If every method fails, use the simple fallback quantization method built into the scripts
4. Make sure your Python environment is clean, with no conflicting library versions