Error when packaging a paddle.inference deployment with pyinstaller
Paddle Inference · Q&A · Deployment & Inference · Linux

On Ubuntu 22.04 I followed a tutorial to build a segmentation model and walked through the whole workflow. When I run the deployment script directly with python there are some warnings, but overall it works fine. However, when I package it with pyinstaller, things break:

# Code adapted from PaddleSeg/deploy/python/infer.py
import os
import numpy as np
import time
import cv2

from paddle.inference import create_predictor
from paddle.inference import PrecisionType
from paddle.inference import Config as PredictConfig

from paddleseg.deploy.infer import DeployConfig
from paddleseg.utils import get_image_list, logger
from paddleseg.utils.visualize import get_pseudo_color_map, visualize, get_color_map_list


def use_auto_tune(det):
    return hasattr(PredictConfig, "collect_shape_range_info") \
        and hasattr(PredictConfig, "enable_tuned_tensorrt_dynamic_shape") \
        and det.device == "gpu" and det.use_trt and det.enable_auto_tune


def auto_tune(args, imgs, img_nums):
    """
    Use images to auto tune the dynamic shape for trt sub graph.
    The tuned shape saved in args.auto_tuned_shape_file.

    Args:
        args(dict): input args.
        imgs(str, list[str], numpy): the path for images or the origin images.
        img_nums(int): the nums of images used for auto tune.
    Returns:
        None
    """
    logger.info("Auto tune the dynamic shape for GPU TRT.")

    assert use_auto_tune(args), "Do not support auto_tune, which requires " \
        "device==gpu && use_trt==True && paddle >= 2.2"

    if not isinstance(imgs, (list, tuple)):
        imgs = [imgs]
    num = min(len(imgs), img_nums)

    cfg = DeployConfig(args.cfg)
    pred_cfg = PredictConfig(cfg.model, cfg.params)
    pred_cfg.enable_use_gpu(100, 0)
    if not args.print_detail:
        pred_cfg.disable_glog_info()
    pred_cfg.collect_shape_range_info(args.auto_tuned_shape_file)

    # print("create_predictor pred_config: {}".format(pred_cfg))
    predictor = create_predictor(pred_cfg)
    input_names = predictor.get_input_names()
    input_handle = predictor.get_input_handle(input_names[0])

    for i in range(0, num):
        if isinstance(imgs[i], str):
            data = {'img': imgs[i]}
            data = np.array([cfg.transforms(data)['img']])
        else:
            data = imgs[i]
        input_handle.reshape(data.shape)
        input_handle.copy_from_cpu(data)
        try:
            predictor.run()
        except Exception as e:
            logger.info(str(e))
            logger.info(
                "Auto tune failed. Usually, the error is out of GPU memory "
                "for the model or image is too large. \n")
            del predictor
            if os.path.exists(args.auto_tuned_shape_file):
                os.remove(args.auto_tuned_shape_file)
            return

    logger.info("Auto tune success.\n")

# More code follows that I'm not pasting; the calls are all similar, and the error seems to happen during import initialization anyway
# Packaging command
pyinstaller -D server.py --paths="/home/cc/.virtualenvs/defect/lib/python3.10/site-packages/:/home/cc/.virtualenvs/defect/lib/python3.10/site-packages/opencv_python.libs" --add-data "models:models"
# Error output at runtime
Traceback (most recent call last):
  File "server.py", line 8, in 
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "cloud_utils/defectdetection.py", line 8, in 
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "paddle/__init__.py", line 25, in 
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "paddle/framework/__init__.py", line 17, in 
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "paddle/framework/random.py", line 16, in 
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "paddle/fluid/__init__.py", line 36, in 
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "paddle/fluid/framework.py", line 37, in 
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "paddle/fluid/core.py", line 376, in 
  File "paddle/fluid/core.py", line 368, in set_paddle_lib_path
TypeError: sequence item 0: expected str instance, NoneType found
[89525] Failed to execute script 'server' due to unhandled exception!

The code runs fine under plain Python; it only fails after packaging.
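
Looking at the last two traceback frames, the crash happens inside set_paddle_lib_path in paddle/fluid/core.py, where a path is joined from a sequence whose first item is None. As a minimal, standalone illustration of that failure mode (this is not Paddle's actual source, just the same kind of join), the following raises the identical TypeError:

# Minimal sketch of the failure mode the traceback points at.
import os

user_site = None  # in a PyInstaller bundle, site.USER_SITE is often left as None
os.path.sep.join([user_site, "paddle", "libs"])
# TypeError: sequence item 0: expected str instance, NoneType found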

All comments (1)
Victor_jiang
#2 · Replied 2023-12

This error is caused by a NoneType value being passed in when the Paddle library path is set. To fix it, you need to make sure a string is passed at that point. (Just a guess: looking at the code, it reads site.USER_SITE, so the None probably comes from that or from some environment variable, but I haven't tested it.)
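
If the guess about site.USER_SITE is right, one possible workaround is a PyInstaller runtime hook that gives site.USER_SITE a string value before paddle is imported. This is an untested sketch with a hypothetical file name (fix_paddle_usersite.py); it only avoids the NoneType join, it does not by itself guarantee that paddle's native libraries are bundled:

# fix_paddle_usersite.py -- hypothetical runtime hook, untested
import os
import site
import sys

# In a frozen app, site.main() is not run, so site.USER_SITE can stay None.
# paddle/fluid/core.py joins it into a path inside set_paddle_lib_path(),
# which raises the TypeError above. Any existing string path avoids that.
if getattr(site, "USER_SITE", None) is None:
    bundle_dir = getattr(sys, "_MEIPASS", os.path.dirname(sys.executable))
    site.USER_SITE = os.path.join(bundle_dir, "paddle")

The hook would be passed at build time with PyInstaller's --runtime-hook option, e.g. pyinstaller -D server.py --runtime-hook fix_paddle_usersite.py, in addition to the original --paths and --add-data arguments.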
