Error running service: cannot pickle '_thread.lock' object. How can this be solved? #2015

Opened by @eunij-peanut

Description

Error when deploying on Windows:

python web_service.py --config=config.yml
args config: {'rpc_port': 18091, 'http_port': 9998, 'worker_num': 1, 'build_dag_each_worker': False, 'dag': {'is_thread_op': True, 'retry': 3, 'use_profile': False, 'tracer': {'interval_s': -1}}, 'op': {'det': {'concurrency': 1, 'local_service_conf': {'client_type': 'local_predictor', 'model_config': './ppocr_det_v4_serving', 'devices': '', 'ir_optim': False}}, 'rec': {'concurrency': 1, 'timeout': 3000, 'retry': 1, 'local_service_conf': {'client_type': 'local_predictor', 'model_config': './ppocr_rec_v4_serving', 'devices': '', 'ir_optim': False}}}}
[DAG] Succ init
I0113 11:20:45.652944 10512 analysis_predictor.cc:1626] MKLDNN is enabled
I0113 11:20:45.652944 10512 analysis_predictor.cc:1740] Ir optimization is turned off, no ir pass will be executed.
--- Running analysis [ir_graph_build_pass]
I0113 11:20:45.652944 10512 executor.cc:187] Old Executor is Running.
--- Running analysis [ir_analysis_pass]
--- Running analysis [save_optimized_model_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0113 11:20:45.668565 10512 memory_optimize_pass.cc:118] The persistable params in main graph are : 10.2689MB
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : linear_170.tmp_1 size: 26500
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : shape_5.tmp_0_slice_1 size: 4
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : conv2d_198.tmp_1 size: 11520
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : linear_170.tmp_0 size: 26500
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : pool2d_3.tmp_0_clone_0 size: 1920
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : batch_norm_5.tmp_0 size: 1920
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : x size: 576
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : transpose_44.tmp_0_slice_2 size: 480
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : fill_constant_17.tmp_0 size: 4
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : fill_constant_19.tmp_0 size: 4
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : shape_3.tmp_0_slice_1 size: 4
I0113 11:20:45.684191 10512 memory_optimize_pass.cc:246] Cluster name : reshape2_27.tmp_1 size: 0
--- Running analysis [ir_graph_to_program_pass]
I0113 11:20:45.736836 10512 analysis_predictor.cc:1838] ======= optimize end =======
I0113 11:20:45.736836 10512 naive_executor.cc:200] --- skip [feed], feed -> x
I0113 11:20:45.736836 10512 naive_executor.cc:200] --- skip [softmax_11.tmp_0], fetch -> fetch
[OP Object] init success
I0113 11:20:45.762605 10512 analysis_predictor.cc:1626] MKLDNN is enabled
I0113 11:20:45.762605 10512 analysis_predictor.cc:1740] Ir optimization is turned off, no ir pass will be executed.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_analysis_pass]
--- Running analysis [save_optimized_model_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:118] The persistable params in main graph are : 4.47025MB
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : tmp_85 size: 1536
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : tmp_21 size: 192
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : tmp_94 size: 1536
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : tmp_118 size: 96
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : relu_1.tmp_0 size: 384
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : hardswish_79.tmp_0 size: 1536
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : tmp_35 size: 384
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : tmp_73 size: 768
I0113 11:20:45.782716 10512 memory_optimize_pass.cc:246] Cluster name : x size: 12
--- Running analysis [ir_graph_to_program_pass]
I0113 11:20:45.830435 10512 analysis_predictor.cc:1838] ======= optimize end =======
I0113 11:20:45.830435 10512 naive_executor.cc:200] --- skip [feed], feed -> x
I0113 11:20:45.830435 10512 naive_executor.cc:200] --- skip [sigmoid_0.tmp_0], fetch -> fetch
[OP Object] init success
[PipelineServicer] succ init
Error running service: cannot pickle '_thread.lock' object

Current environment: Windows, Anaconda, packages installed with pip
Python 3.8

Name: paddlepaddle
Version: 2.6.2
Name: paddleocr
Version: 2.9.1
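
For context: this message usually comes from Python's multiprocessing on Windows, which starts child processes with the "spawn" method and therefore pickles every object handed to a child; a threading.Lock (internally a '_thread.lock') cannot be pickled. The snippet below is a minimal generic sketch of that mechanism, not PaddleServing's actual code:

```python
# Minimal sketch of the generic mechanism (assumption: not PaddleServing's
# actual internals). On Windows, multiprocessing starts children with
# "spawn", which pickles all arguments; a threading.Lock cannot be pickled.
import multiprocessing
import threading

class Service:
    def __init__(self):
        self.lock = threading.Lock()  # internally a '_thread.lock' object

def run(service):
    print("started", service)

if __name__ == "__main__":
    p = multiprocessing.Process(target=run, args=(Service(),))
    p.start()  # on Windows: TypeError: cannot pickle '_thread.lock' object
    p.join()
```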

Using threads; config.yml is as follows:
# rpc port; rpc_port and http_port may not both be empty. When rpc_port is empty and http_port is not, rpc_port is automatically set to http_port+1
rpc_port: 18091

# http port; rpc_port and http_port may not both be empty. When rpc_port is usable and http_port is empty, no http_port is generated automatically

# originally not commented out: http_port: 9998

http_port: 9998
# worker_num, the maximum concurrency. When build_dag_each_worker=True, the framework creates worker_num processes, each building a grpcServer and a DAG
## When build_dag_each_worker=False, the framework sets max_workers=worker_num on the grpc thread pool of the main thread
# originally: worker_num: 10
worker_num: 1

# build_dag_each_worker: when False, the framework creates one DAG inside the process; when True, the framework creates multiple independent DAGs, one per process
# originally: build_dag_each_worker: False
build_dag_each_worker: False

dag:
    # op resource type: True for the thread model; False for the process model
    # originally: is_thread_op: False
    is_thread_op: False

    # number of retries
    # originally: retry: 10
    retry: 3

    # profiling: True generates Timeline performance data (with some impact on performance); False disables it
    # originally: use_profile: True
    use_profile: False

    tracer:
        # originally: interval_s: 10
        interval_s: -1

op:
    det:
        # concurrency; thread-level concurrency when is_thread_op=True, otherwise process-level
        # originally: concurrency: 8
        concurrency: 1

        # when the op config has no server_endpoints, the local service config is read from local_service_conf
        local_service_conf:
            # client type: brpc, grpc, or local_predictor. local_predictor does not start a Serving service; prediction runs in-process
            client_type: local_predictor

            # det model path
            # model_config: ./ppocr_det_v3_serving
            model_config: ./ppocr_det_v4_serving

            # fetch list, using the alias_name of fetch_var in client_config; if unset, all output variables are fetched by default
            # fetch_list: ["sigmoid_0.tmp_0"]

            # compute device IDs: when devices is "" or unset, prediction runs on CPU; when devices is "0" or "0,1,2", prediction runs on the listed GPU cards
            # originally: devices: "0"
            devices: ""

            # originally: ir_optim: True
            ir_optim: False
    rec:
        # concurrency; thread-level concurrency when is_thread_op=True, otherwise process-level
        # originally: concurrency: 4
        concurrency: 1

        # timeout in ms
        # originally: timeout: -1
        timeout: 3000

        # number of retries when talking to Serving; no retry by default
        retry: 1

        # when the op config has no server_endpoints, the local service config is read from local_service_conf
        local_service_conf:
            # client type: brpc, grpc, or local_predictor. local_predictor does not start a Serving service; prediction runs in-process
            client_type: local_predictor

            # rec model path
            # model_config: ./ppocr_rec_v3_serving
            model_config: ./ppocr_rec_v4_serving

            # fetch list, using the alias_name of fetch_var in client_config; if unset, all output variables are fetched by default
            # fetch_list:

            # compute device IDs: when devices is "" or unset, prediction runs on CPU; when devices is "0" or "0,1,2", prediction runs on the listed GPU cards
            # originally: devices: "0"
            devices: ""

            # originally: ir_optim: True
            ir_optim: False

The error above is raised after running the deployment command.
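
For reference, once the service does start successfully, the stock PaddleOCR pipeline example queries it over HTTP as below. This is a hedged sketch: the "ocr" route name and the key/value request format are taken from the standard pipeline_http_client.py shipped with the PaddleOCR serving example, and test.jpg is a placeholder image path.

```python
# Hedged sketch following the stock PaddleOCR pipeline_http_client.py
# example; the "ocr" route name is an assumption taken from that example.
import base64
import json

import requests

url = "http://127.0.0.1:9998/ocr/prediction"  # http_port from config.yml

with open("test.jpg", "rb") as f:  # placeholder image path
    image = base64.b64encode(f.read()).decode("utf8")

data = {"key": ["image"], "value": [image]}
resp = requests.post(url=url, data=json.dumps(data))
print(resp.json())
```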
