Description
Installing paddlepaddle-dcu in the official container image: version 3.2.0 reports an error.
Installation:
Pull the image:
docker pull ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle-dcu:dtk24.04.1-kylinv10-gcc82
Start the container with a command like the following:
docker run -it --name paddle-dcu-dev -v pwd:/work -w=/work --shm-size=128G --network=host --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle-dcu:dtk24.04.1-kylinv10-gcc82 /bin/bash
Install with the following command:
python -m pip install paddlepaddle-dcu==3.2.0 -i https://www.paddlepaddle.org.cn/packages/stable/dcu/
Error output:
[root@localhost data1]# HIP_VISIBLE_DEVICES=0 paddlespeech_server start --config_file application_gpu.yaml
[2025-10-30 10:01:05,965] [ INFO] - start to init the engine
[2025-10-30 10:01:05,965] [ INFO] - asr : python engine.
W1030 10:01:11.446635 1189 gpu_resources.cc:114] Please NOTE: device: 0, GPU Compute Capability: 90.2, Driver API Version: 50724.2, Runtime API Version: 50724.2
[2025-10-30 10:01:12,484] [ ERROR] - Failed to start server.
[2025-10-30 10:01:12,485] [ ERROR] - The value of end must be finite, but received: 0.
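Before digging further into PaddleSpeech, it may help to confirm that the paddlepaddle-dcu 3.2.0 build itself is functional. A minimal sanity-check sketch, assuming the DCU is exposed to Paddle as a 'gpu' device inside this container:

import paddle                          # paddlepaddle-dcu build
print(paddle.__version__)              # expected: 3.2.0
print(paddle.device.get_device())      # expected: 'gpu:0' with HIP_VISIBLE_DEVICES=0
paddle.utils.run_check()               # built-in self-check; runs a small job on the device

If run_check() already fails here, the problem is in the paddlepaddle-dcu wheel rather than in paddlespeech_server.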
Switching to paddlepaddle-dcu 3.0.0:
python -m pip install paddlepaddle-dcu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/dcu/
Error output:
paddlespeech_server start --config_file application_gpu.yaml
[2025-10-30 10:15:19,576] [ INFO] - start to init the engine
[2025-10-30 10:15:19,576] [ INFO] - asr : python engine.
W1030 10:15:24.774974 1269 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 90.2, Driver API Version: 50724.2, Runtime API Version: 50724.2
W1030 10:15:25.802332 1269 dygraph_functions.cc:84820] got different data type, run type promotion automatically, this may cause data type been changed.
python3.10: /paddle/third_party/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorExecutor.h:612: static void Eigen::internal::TensorExecutor<const Eigen::TensorAssignOp<Eigen::TensorStridingSlicingOp<const Eigen::DSizes<long, 3>, const Eigen::DSizes<long, 3>, const Eigen::DSizes<long, 3>, Eigen::TensorMap<Eigen::Tensor<float, 3, 1>, 0>>, const Eigen::TensorMap<Eigen::Tensor<float, 3, 1>, 0>>, Eigen::GpuDevice, false, Eigen::internal::Off>::run(const Expression &, const Eigen::GpuDevice &) [Expression = const Eigen::TensorAssignOp<Eigen::TensorStridingSlicingOp<const Eigen::DSizes<long, 3>, const Eigen::DSizes<long, 3>, const Eigen::DSizes<long, 3>, Eigen::TensorMap<Eigen::Tensor<float, 3, 1>, 0>>, const Eigen::TensorMap<Eigen::Tensor<float, 3, 1>, 0>>, Device = Eigen::GpuDevice, Vectorizable = false, Tiling = Eigen::internal::Off]: Assertion `hipGetLastError() == hipSuccess' failed.
Aborted (core dumped)
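The assertion comes from an Eigen strided-slice assignment kernel on a rank-3 float tensor running on the HIP (GPU) device. A hypothetical minimal repro along those lines, not taken from the PaddleSpeech code, to test whether plain slice assignment already aborts on the DCU:

import paddle

paddle.set_device('gpu:0')                    # the DCU is exposed as 'gpu' in paddlepaddle-dcu
x = paddle.zeros([2, 8, 4], dtype='float32')  # rank-3 float tensor, as in the stack trace
y = paddle.ones([2, 3, 4], dtype='float32')
x[:, 2:5, :] = y                              # slice assignment; may exercise the same strided-slice path
print(float(x.sum()))                         # expected 24.0 if the kernel runs correctly

If this small script also dies with the hipGetLastError assertion, the bug can be reported against Paddle's DCU kernels directly, independent of PaddleSpeech.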
Test configuration used (application_gpu.yaml):
# This is the parameter configuration file for PaddleSpeech Offline Serving.

#################################################################################
#                             SERVER SETTING                                    #
#################################################################################
host: 0.0.0.0
port: 8090

# The task format in the engin_list is: <speech task>_<engine type>
# task choices = ['asr_python', 'asr_inference', 'tts_python', 'tts_inference', 'cls_python', 'cls_inference', 'text_python', 'vector_python']
protocol: 'http'
engine_list: ['asr_python', 'tts_python', 'cls_python', 'text_python', 'vector_python']

#################################################################################
#                                ENGINE CONFIG                                  #
#################################################################################

################################### ASR #########################################
################### speech task: asr; engine_type: python #######################
asr_python:
    model: 'conformer_wenetspeech'
    lang: 'zh'
    sample_rate: 16000
    cfg_path:  # [optional]
    ckpt_path:  # [optional]
    decode_method: 'attention_rescoring'
    force_yes: True
    device:  # set 'gpu:id' or 'cpu'

################### speech task: asr; engine_type: inference #######################
asr_inference:
    # model_type choices=['deepspeech2offline_aishell']
    model_type: 'deepspeech2offline_aishell'
    am_model:  # the pdmodel file of am static model [optional]
    am_params:  # the pdiparams file of am static model [optional]
    lang: 'zh'
    sample_rate: 16000
    cfg_path:
    decode_method:
    force_yes: True

    am_predictor_conf:
        device:  # set 'gpu:id' or 'cpu'
        switch_ir_optim: True
        glog_info: False  # True -> print glog
        summary: True  # False -> do not show predictor config

################################### TTS #########################################
################### speech task: tts; engine_type: python #######################
tts_python:
    # am (acoustic model) choices=['speedyspeech_csmsc', 'fastspeech2_csmsc',
    #                              'fastspeech2_ljspeech', 'fastspeech2_aishell3',
    #                              'fastspeech2_vctk', 'fastspeech2_mix',
    #                              'tacotron2_csmsc', 'tacotron2_ljspeech']
    am: 'fastspeech2_csmsc'
    am_config:
    am_ckpt:
    am_stat:
    phones_dict:
    tones_dict:
    speaker_dict:

    # voc (vocoder) choices=['pwgan_csmsc', 'pwgan_ljspeech', 'pwgan_aishell3',
    #                        'pwgan_vctk', 'mb_melgan_csmsc', 'style_melgan_csmsc',
    #                        'hifigan_csmsc', 'hifigan_ljspeech', 'hifigan_aishell3',
    #                        'hifigan_vctk', 'wavernn_csmsc']
    voc: 'mb_melgan_csmsc'
    voc_config:
    voc_ckpt:
    voc_stat:

    # others
    lang: 'zh'
    device: 'gpu:1'  # set 'gpu:id' or 'cpu'

################### speech task: tts; engine_type: inference #######################
tts_inference:
    # am (acoustic model) choices=['speedyspeech_csmsc', 'fastspeech2_csmsc']
    am: 'fastspeech2_csmsc'
    am_model:  # the pdmodel file of your am static model (XX.pdmodel)
    am_params:  # the pdiparams file of your am static model (XX.pdipparams)
    am_sample_rate: 24000
    phones_dict:
    tones_dict:
    speaker_dict:

    am_predictor_conf:
        device: 'gpu:1'  # set 'gpu:id' or 'cpu'
        switch_ir_optim: True
        glog_info: False  # True -> print glog
        summary: True  # False -> do not show predictor config

    # voc (vocoder) choices=['pwgan_csmsc', 'mb_melgan_csmsc','hifigan_csmsc']
    voc: 'mb_melgan_csmsc'
    voc_model:  # the pdmodel file of your vocoder static model (XX.pdmodel)
    voc_params:  # the pdiparams file of your vocoder static model (XX.pdipparams)
    voc_sample_rate: 24000

    voc_predictor_conf:
        device: 'gpu:1'  # set 'gpu:id' or 'cpu'
        switch_ir_optim: True
        glog_info: False  # True -> print glog
        summary: True  # False -> do not show predictor config

    # others
    lang: 'zh'

################################### CLS #########################################
################### speech task: cls; engine_type: python #######################
cls_python:
    # model choices=['panns_cnn14', 'panns_cnn10', 'panns_cnn6']
    model: 'panns_cnn14'
    cfg_path:  # [optional] Config of cls task.
    ckpt_path:  # [optional] Checkpoint file of model.
    label_file:  # [optional] Label file of cls task.
    device:  # set 'gpu:id' or 'cpu'

################### speech task: cls; engine_type: inference #######################
cls_inference:
    # model_type choices=['panns_cnn14', 'panns_cnn10', 'panns_cnn6']
    model_type: 'panns_cnn14'
    cfg_path:
    model_path:  # the pdmodel file of am static model [optional]
    params_path:  # the pdiparams file of am static model [optional]
    label_file:  # [optional] Label file of cls task.

    predictor_conf:
        device:  # set 'gpu:id' or 'cpu'
        switch_ir_optim: True
        glog_info: False  # True -> print glog
        summary: True  # False -> do not show predictor config

################################### Text #########################################
################### text task: punc; engine_type: python #######################
text_python:
    task: punc
    model_type: 'ernie_linear_p3_wudao'
    lang: 'zh'
    sample_rate: 16000
    cfg_path:  # [optional]
    ckpt_path:  # [optional]
    vocab_file:  # [optional]
    device:  # set 'gpu:id' or 'cpu'

################################### Vector ######################################
################### Vector task: spk; engine_type: python #######################
vector_python:
    task: spk
    model_type: 'ecapatdnn_voxceleb12'
    sample_rate: 16000
    cfg_path:  # [optional]
    ckpt_path:  # [optional]
    device:  # set 'gpu:id' or 'cpu'
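To isolate whether the failure is in the serving layer or in the conformer_wenetspeech ASR model itself on the DCU, the model can also be exercised outside the server. A sketch, assuming the documented paddlespeech Python CLI API and a placeholder 16 kHz WAV file zh.wav:

from paddlespeech.cli.asr.infer import ASRExecutor

asr = ASRExecutor()
text = asr(
    audio_file='zh.wav',              # placeholder input, any 16 kHz mono WAV
    model='conformer_wenetspeech',    # same model as asr_python in the config above
    lang='zh',
    sample_rate=16000,
    force_yes=True,
    device='gpu:0',                   # same DCU device used by the server
)
print(text)

If this direct call reproduces the crash, the config file can be ruled out and the issue narrowed to the model/runtime combination on DCU.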