Description
Checklist:
- Searched related historical issues for an answer
- Read the FAQ
- Read the PaddleX documentation
- Confirmed the bug has not already been fixed in the latest version
Describe the problem
I installed the Paddle NPU plugin following the official website.
Docker image used: ccr-2vdh3abv-pub.cnc.bj.baidubce.com/device/paddle-npu:cann800-ubuntu20-npu-910b-base-aarch64-gcc84
Installation steps:
```bash
pip install paddlepaddle -i https://www.paddlepaddle.org.cn/packages/stable/cpu
pip install paddle-custom-npu -i https://www.paddlepaddle.org.cn/packages/stable/npu
pip install paddlex
pip install "paddlex[base]"
pip install "paddlex[ocr]"
python -m pip install numpy==1.26.4
python -m pip install opencv-python==3.4.18.65
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1:$LD_PRELOAD
python -c "import paddle_custom_device; paddle_custom_device.npu.version()"
python -c "import paddle; paddle.utils.run_check()"
```
Up to this point, everything works without problems.
Next, I tried serving deployment with PaddleX:
```bash
paddlex --install serving
paddlex --serve --pipeline PP-StructureV3.yaml --host 0.0.0.0 --port 8118 --device npu
```
This deploys successfully on the local machine.
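For reference, the deployed service can be called over HTTP. A minimal client sketch, assuming the `/layout-parsing` route and the `{"file", "fileType"}` payload described in the PP-StructureV3 serving docs (`demo.png` is a placeholder input):

```python
import base64
import requests

# Minimal client sketch for the locally served pipeline; route and
# payload shape follow the PP-StructureV3 serving docs.
with open("demo.png", "rb") as f:
    payload = {
        "file": base64.b64encode(f.read()).decode("ascii"),
        "fileType": 1,  # 1 = image, 0 = PDF (per the serving docs)
    }

resp = requests.post("http://127.0.0.1:8118/layout-parsing", json=payload, timeout=300)
resp.raise_for_status()
print(list(resp.json()["result"].keys()))
```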
When I enable the high-performance inference plugin, however, it fails with `'npu' is not a supported device type`.
Reference docs for installing the NPU high-performance inference plugin:
https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/high_performance_inference.html#112
https://paddlepaddle.github.io/PaddleX/latest/practical_tutorials/high_performance_npu_tutorial.html#22
```bash
paddlex --install hpi-npu
paddlex --install paddle2onnx
paddlex --serve --pipeline PP-StructureV3.yaml --host 0.0.0.0 --port 8118 --device npu --use_hpip
```

Reproduction
High-performance inference
- Did you run through the full flow exactly as described in the high-performance inference documentation tutorial?
Yes; it errors once --use_hpip is used (see the sketch below).
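For what it's worth, the same error should be reproducible outside serving via the Python API, assuming it mirrors the CLI flags (a minimal sketch, untested here):

```python
from paddlex import create_pipeline

# Minimal repro sketch outside serving, assuming the Python API mirrors
# the CLI flags --device npu --use_hpip.
pipeline = create_pipeline(
    pipeline="PP-StructureV3.yaml",
    device="npu",
    use_hpip=True,
)
# Expected to raise:
# RuntimeError: ... 'npu' is not a supported device type.
```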
Serving deployment
- Did you run through the full flow exactly as described in the serving deployment documentation tutorial?
It deploys and runs fine without --use_hpip.
- Did you use the high-performance inference plugin with serving deployment?
Yes, via paddlex --install hpi-npu.
- Which serving deployment solution did you use?
Basic serving deployment.
- If this is an issue with multi-language client calls, please provide a calling example.
N/A
Edge deployment
- Did you run through the full flow exactly as described in the edge deployment documentation tutorial?
N/A
- Which edge device did you use? What are the corresponding PaddlePaddle and PaddleLite versions?
N/A
Which model and dataset are you using?
PP-StructureV3
Please provide the error message and related logs
```
/data# paddlex --serve --pipeline PP-StructureV3.yaml --host 0.0.0.0 --port 8118 --device npu --use_hpip
Creating model: ('PP-LCNet_x1_0_doc_ori', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori`.
I1127 15:10:35.030030 20730 init.cc:238] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.10/dist-packages/paddle_custom_device
I1127 15:10:35.030079 20730 init.cc:146] Try loading custom device libs from: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I1127 15:10:35.745096 20730 custom_device_load.cc:51] Succeed in loading custom runtime in lib: /usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
I1127 15:10:35.745155 20730 custom_device_load.cc:58] Skipped lib [/usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so]: no custom engine Plugin symbol in this lib.
I1127 15:10:35.748632 20730 custom_kernel.cc:68] Succeed in loading 359 custom kernel(s) from loaded lib(s), will be used like native ones.
I1127 15:10:35.748835 20730 init.cc:158] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
I1127 15:10:35.748883 20730 init.cc:244] CustomDevice: npu, visible devices count: 1
Failed to create the pipeline
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/paddlex/paddlex_cli.py", line 500, in serve
pipeline = create_pipeline(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/__init__.py", line 167, in create_pipeline
pipeline = BasePipeline.get(pipeline_name)(
File "/usr/local/lib/python3.10/dist-packages/paddlex/utils/deps.py", line 206, in _wrapper
return old_init_func(self, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 103, in __init__
self._pipeline = self._create_internal_pipeline(config, self.device)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
return self._pipeline_cls(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 84, in __init__
self.inintial_predictor(config)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 134, in inintial_predictor
self.doc_preprocessor_pipeline = self.create_pipeline(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/base.py", line 140, in create_pipeline
pipeline = create_pipeline(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/__init__.py", line 167, in create_pipeline
pipeline = BasePipeline.get(pipeline_name)(
File "/usr/local/lib/python3.10/dist-packages/paddlex/utils/deps.py", line 206, in _wrapper
return old_init_func(self, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 103, in __init__
self._pipeline = self._create_internal_pipeline(config, self.device)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
return self._pipeline_cls(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/doc_preprocessor/pipeline.py", line 69, in __init__
self.doc_ori_classify_model = self.create_model(doc_ori_classify_config)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/base.py", line 106, in create_model
model = create_predictor(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/__init__.py", line 84, in create_predictor
return BasePredictor.get(model_name)(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/image_classification/predictor.py", line 49, in __init__
self.preprocessors, self.infer, self.postprocessors = self._build()
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/image_classification/predictor.py", line 82, in _build
infer = self.create_static_infer()
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 304, in create_static_infer
return HPInfer(
File "/usr/local/lib/python3.10/dist-packages/paddlex/utils/deps.py", line 156, in _wrapper
return old_init_func(self, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/common/static_infer.py", line 620, in __init__
backend, backend_config = self._determine_backend_and_config()
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/common/static_infer.py", line 675, in _determine_backend_and_config
raise RuntimeError(
RuntimeError: No inference backend and configuration could be suggested. Reason: 'npu' is not a supported device type.
```

Environment
Please provide your PaddlePaddle, PaddleX, and Python version numbers
paddle-custom-npu 3.2.0
paddle2onnx 2.0.2rc3
paddlepaddle 3.2.2
paddlex 3.3.10
python 3.10.16
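(The versions above can be gathered with a snippet like the following; nothing here is PaddleX-specific:)

```python
from importlib.metadata import version, PackageNotFoundError

# Print installed versions of the relevant packages.
for pkg in ("paddlepaddle", "paddle-custom-npu", "paddle2onnx", "paddlex"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```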
Please provide your operating system information, e.g. Linux/Windows/macOS
Linux
CPU: aarch64
NPU: 910B
What CUDA/cuDNN versions are you using?
N/A (running on NPU; CUDA/cuDNN is not used).