PaddleX/latest/pipeline_deploy/serving #3524
Replies: 41 comments 133 replies
-
Hello, after starting a service with paddlex --serve --pipeline image_classification, what is the command to shut the service down?
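For what it's worth, a sketch of stopping it as an ordinary foreground process; this is plain process management, not a dedicated PaddleX shutdown command (the thread does not confirm one exists):

```bash
# If the server is running in the foreground, Ctrl+C stops it.
# If it was started in the background, terminate it by PID:
kill "$(pgrep -f 'paddlex --serve')"
```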
-
paddlex --serve --pipeline {pipeline name or pipeline config file path} [{other CLI options}] — if I am deploying multiple pipelines, do I need to start multiple services?
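If it does turn out that each pipeline needs its own service, a minimal sketch would be one serving process per pipeline on separate ports (the port numbers here are arbitrary examples; --pipeline and --port are the flags used elsewhere in this thread):

```bash
# One serving process per pipeline, each listening on its own port.
paddlex --serve --pipeline image_classification --port 8080 &
paddlex --serve --pipeline OCR --port 8081 &
```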
-
Does serving deployment support high-performance inference on real-time data?
-
I am testing the high-stability serving deployment with the general OCR SDK. The local Docker container starts successfully, the client.py test of GRPCInferenceService passes, and the Metrics Service is reachable, but requests to HTTPService keep failing with 400 Bad Request. Is there documentation for the HTTPService API? What are the request parameters, and how do I call HTTPService correctly?
-
The SDK download links are all broken; please fix them.
-
Hello.
-
Deployed with Docker — why does λ localhost ~/PaddleX paddlex --serve --pipeline OCR
-
How do I call the high-stability serving deployment over HTTP?
-
Hello, the SDK download is broken again.
-
Hello, if my CUDA version is 12.4, how do I get a matching build of ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/hps:paddlex3.1-gpu? That image only supports CUDA 11.8, right?
-
How do I enable high-performance inference in serving deployment?
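For reference, a later comment in this thread starts the server with the --use_hpip flag; a minimal invocation along those lines, assuming the high-performance inference plugin (hpip-gpu, as that comment calls it) is already installed, would be:

```bash
paddlex --serve --pipeline OCR --port 8118 --use_hpip
```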
-
Running step 1.1, paddlex --install serving, keeps failing. The OS is AlmaLinux 9.6. On Linux you strongly recommend installing PaddleX via Docker, so why is there no tutorial here for running serving deployment from a Docker-installed PaddleX? Please advise, thanks! Excerpt from the output:
  Using cached future-1.0.0-py3-none-any.whl (491 kB)
  [notice] A new release of pip is available: 25.0.1 -> 25.1.1
  During handling of the above exception, another exception occurred:
  Traceback (most recent call last):
-
https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/serving.html#23 — the adjusted pipeline_config.yaml:
```yaml
pipeline_name: OCR
text_type: general
use_doc_preprocessor: True
SubPipelines:
SubModules:
```
-
For models that still need to be downloaded after the service starts, what can I do if the download fails because of network issues? Can the models be mounted manually? Log excerpt:
  I0718 07:56:17.887946 7 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
  The above exception was the direct cause of the following exception:
  Traceback (most recent call last):
  The above exception was the direct cause of the following exception:
  Traceback (most recent call last):
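A possible workaround sketch, assuming PaddleX caches downloaded official models under ~/.paddlex inside the container (the cache path, host path, and image name below are all assumptions, not confirmed in this thread): download the models on a machine with network access and bind-mount the cache directory into the container:

```bash
# Hypothetical example: mount a pre-populated model cache into the serving container.
docker run -it --gpus all \
  -v /path/on/host/paddlex_models:/root/.paddlex \
  -p 8080:8080 \
  <paddlex-serving-image> \
  paddlex --serve --pipeline OCR
```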
-
In section 2.4.2, "Manually constructing the HTTP request", the request body format shown is wrong: the JSON inside "data" should be of String type.
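For context, a sketch of what a request to the Triton-based HTTPService might look like, assuming the standard KServe-v2 style /v2/models/{name}/infer endpoint and the input/output tensor layout shown later in this thread (name "input", TYPE_STRING, dims [1]); the model name "ocr", port 8000, and the fields inside the task payload ({"file": ...}) are assumptions, not taken from the official docs:

```python
import base64
import json
import requests

# Hypothetical task payload for the pipeline; its field names are an assumption.
with open("sample.jpg", "rb") as f:
    task = {"file": base64.b64encode(f.read()).decode("ascii")}

body = {
    "inputs": [
        {
            "name": "input",        # matches the TYPE_STRING input tensor in config.pbtxt
            "shape": [1, 1],        # [batch, 1] because max_batch_size > 0
            "datatype": "BYTES",
            # The task JSON is serialized to a single string element, i.e. "data"
            # carries a String, not a nested JSON object.
            "data": [json.dumps(task)],
        }
    ],
    "outputs": [{"name": "output"}],
}

resp = requests.post("http://localhost:8000/v2/models/ocr/infer", json=body)
print(resp.status_code, resp.json())
```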
-
We have verified that on H20 GPUs the ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/hps:paddlex3.2-gpu image (CUDA 11.8) fails to recognize images (the recognition result is empty and the returned rec_texts is empty as well). Is there an image that supports CUDA 12.6?
-
Are 50-series GPUs still unsupported for high-performance inference and high-stability deployment?
-
Hello, I am using the high-stability serving deployment for an OCR service on a V100 server. The instance configuration is:
```
backend: "python"
max_batch_size: 16
input [
  {
    name: "input"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
instance_group [
  {
    count: 2
    kind: KIND_GPU
    gpus: [ 2, 3, 4 ]
  }
]
```
The instances run fine. However, when I monitor GPU usage in real time with nvidia-smi while issuing asynchronous concurrent calls over gRPC, inference only ever runs on one card at a time: it computes on GPU 2 for a few seconds, then switches to GPU 3 for a few seconds, and so on, and never runs on multiple GPUs in parallel. What causes this, and is there a workaround?
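For reference, a minimal sketch of the kind of concurrent gRPC calls being described, using the standard tritonclient API against the input/output tensors from the config above; the model name "ocr" and the payload field names are assumptions:

```python
import base64
import json
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import tritonclient.grpc as grpcclient


def infer(image_path):
    # One client per call keeps the sketch simple and unambiguously thread-safe.
    client = grpcclient.InferenceServerClient(url="localhost:8001")
    with open(image_path, "rb") as f:
        task = {"file": base64.b64encode(f.read()).decode("ascii")}
    inp = grpcclient.InferInput("input", [1, 1], "BYTES")
    inp.set_data_from_numpy(np.array([[json.dumps(task)]], dtype=np.object_))
    result = client.infer("ocr", inputs=[inp])  # model name is an assumption
    return result.as_numpy("output")


# Issue many requests concurrently to exercise multiple model instances.
with ThreadPoolExecutor(max_workers=8) as pool:
    outputs = list(pool.map(infer, ["img_%d.jpg" % i for i in range(32)]))
```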
-
A question: we have an industrial application of the object detection pipeline, a project for counting pinhole terminals, using the Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN model. We have a few questions about training on a custom dataset:
-
Image ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/hps:paddlex3.2-gpu
-
Hello, following the tutorial I installed hpip-gpu and paddle2onnx in the CUDA 11.8 image. However, running
paddlex --serve --pipeline OCR --port 8118 --use_hpip
prints: "The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance." And running
paddlex --serve --pipeline OCR --port 8118 --use_hpip --hpi_config '{"backend": "onnxruntime"}'
fails with: "No inference backend and configuration could be suggested. Reason: 'onnxruntime' is not a supported inference backend." Am I missing a step?
-
Is there documentation for the HTTP requests?
-
I downloaded the General Layout Parsing v3 SDK and started it with docker run. Log excerpt:
  E1020 10:19:27.707704 7 model_repository_manager.cc:1186] failed to load 'layout-parsing' version 1: Internal: UnboundLocalError: local variable 'transpose_weight_keys' referenced before assignment
  At:
  I1020 10:19:27.707830 7 server.cc:522]
  I1020 10:19:27.707855 7 server.cc:549]
  I1020 10:19:27.707892 7 server.cc:592]
  I1020 10:19:27.707954 7 tritonserver.cc:1920]
  I1020 10:19:27.707966 7 server.cc:252] Waiting for in-flight requests to complete.
-
Hello, using the paddlex3.0.1 image, installing the serving plugin with paddlex --install serving fails:
  Traceback (most recent call last):
-
Is there corresponding API documentation?
-
In CPU mode, requests return a 404 response code. Has the API request format changed?
-
The documentation leaves a lot to be desired; whatever you look for is missing.
-
When using the high-stability serving deployment (Docker) under high-concurrency requests to the gRPC interface, roughly 10% of requests return empty results even though the images do contain text. When I send those same images through the gRPC interface again, some of them then produce normal output. Checking the Docker container logs, I see this error occurring intermittently:
```
[ ERROR] [2025-11-03 06:21:12,130] [60098771ab20491380763bb02c35a93b] [b70b0af0-16c0-4e41-a52c-53d81ae7b4ea] - Unhandled exception
Traceback (most recent call last):
  File "/paddlex/py310/lib/python3.10/site-packages/paddlex_hps_server/base_model.py", line 88, in execute
    result_or_output = self.run(input_, log_id)
  File "/paddlex/var/paddlex_model_repo/ocr/1/model.py", line 80, in run
    images, data_info = utils.file_to_images(
  File "/paddlex/py310/lib/python3.10/site-packages/paddlex/inference/serving/infra/utils.py", line 252, in file_to_images
    data_info = get_image_info(images[0])
  File "/paddlex/py310/lib/python3.10/site-packages/paddlex/inference/serving/infra/utils.py", line 261, in get_image_info
    return ImageInfo(width=image.shape[1], height=image.shape[0])
AttributeError: 'NoneType' object has no attribute 'shape'
```
What causes this, and how can it be fixed? Thanks!
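One way to narrow this down (a diagnostic sketch, not a confirmed fix): the traceback shows the server-side image decode returning None, so it can help to verify on the client that every payload actually decodes before it is sent, which separates corrupt or truncated uploads from a genuine server-side issue:

```python
import base64

import cv2
import numpy as np


def encode_and_check(image_path):
    """Read a file, confirm OpenCV can decode it, and return the base64 payload."""
    with open(image_path, "rb") as f:
        raw = f.read()
    decoded = cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)
    if decoded is None:
        raise ValueError(f"{image_path}: OpenCV cannot decode this file")
    return base64.b64encode(raw).decode("ascii")
```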
-
Could you provide detailed HTTP request documentation? For testing with a local image, how should the path be set? And is batch inference supported — how should the images be passed?
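For the basic serving mode (paddlex --serve), a minimal sketch of sending a local image is to base64-encode the file and put it in the request body; the endpoint path /ocr, the default port 8080, and the field name "file" follow my reading of the PaddleX serving docs and should be checked against the docs for your pipeline and version. Whether and how batched requests are supported is not confirmed here.

```python
import base64

import requests

with open("local_image.jpg", "rb") as f:
    payload = {"file": base64.b64encode(f.read()).decode("ascii")}

# Endpoint path, port, and field name are assumptions based on the OCR pipeline docs.
resp = requests.post("http://localhost:8080/ocr", json=payload)
resp.raise_for_status()
print(resp.json())
```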
-
The image ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.3.4-paddlepaddle3.2.0-gpu-cuda12.9-cudnn9.9 is too large, nearly 60 GB. Is there a smaller one suitable for deployment? I only need the OCR-related pipelines.
-
PaddleX/latest/pipeline_deploy/serving
https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/serving.html