
High-performance deployment with PaddleX on Ascend: calling the detection model's inference interface hangs #4836

@dolphincats

Description


```python
model = create_model(
    model_name="PP-OCRv4_mobile_det",
    model_dir="PP-OCRv4_mobile_det_infer_om_910B",
    device="npu:0",
    use_hpip=True,
    hpi_config=hpi_config,
    input_shape=[3, 640, 480],
)
```

```text
Inference backend: om
Inference backend config:
ultra_infer/runtime/backends/om/om_backend.cc(64)::Init omModelPath = PP-OCRv4_mobile_det_infer_om_910B/inference.om
ultra_infer/runtime/backends/om/om_backend.cc(258)::LoadModel load model PP-OCRv4_mobile_det_infer_om_910B/inference.om success
[INFO] ultra_infer/runtime/runtime.cc(414)::CreateOMBackend Runtime initialized with Backend::OMONNPU in Device::ASCEND.
```

```python
output = model.predict("general_ocr_002.png")
for res in output:
    res.print(json_format=False)
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json")
```
Execution hangs at this point. The rec model produces output normally on the same setup.
Hardware: 910B4
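In PaddleX 3.x, `model.predict()` returns a generator, so inference may actually run during iteration rather than at the call itself. To pin down whether the hang is inside the backend call (a true deadlock) or just very slow inference, one option is to run the call under a watchdog timeout. Below is a minimal, self-contained sketch of that pattern; `slow_predict` is a hypothetical stand-in for the real `list(model.predict(...))` call, which requires the NPU environment:

```python
import threading
import time


def run_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn in a daemon worker thread and wait up to timeout_s seconds.

    Returns (finished, result). If fn does not return in time, finished is
    False and the worker is left running, which is enough for diagnosis.
    """
    result = {}

    def worker():
        result["value"] = fn(*args, **kwargs)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)
    return (not t.is_alive(), result.get("value"))


# Hypothetical stand-in for the hanging call; in the real case you would use:
#   finished, preds = run_with_timeout(
#       lambda: list(model.predict("general_ocr_002.png")), 60)
def slow_predict():
    time.sleep(5)  # simulates a call that exceeds the timeout
    return "done"


finished, _ = run_with_timeout(slow_predict, 1)
print("finished within timeout:", finished)
```

If the watchdog fires even with a generous timeout, attaching `py-spy dump` (or a native stack dump) to the stuck process would show whether the thread is blocked in the `om` backend or in Python-side post-processing.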
