PaddleX/latest/practical_tutorials/high_performance_npu_tutorial #4783
Replies: 1 comment
Following the tutorial above, I ran OM-backend inference for the OCR pipeline and found it very slow. If I comment out the `for res in output:` loop, it runs fast, but printing `output` directly only shows `<generator object AutoParallelSimpleInferencePipeline.predict at 0xfffe6453ceb0>`. Adding `import os` still does not improve performance. What should I do? The relevant paddle package versions I am using are:

paddle-custom-npu 3.2.0
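For context, here is a minimal sketch of the usage pattern the question describes (the pipeline name, device string, and image path are placeholder assumptions, not the poster's actual code). `predict()` returns a lazy generator, so nothing is computed until the generator is iterated; removing the `for res in output:` loop only appears fast because the inference never actually runs.

```python
# Minimal sketch, assuming the standard PaddleX pipeline API.
from paddlex import create_pipeline

# Placeholder pipeline name and device string.
pipeline = create_pipeline(pipeline="OCR", device="npu:0")

# predict() returns a generator: no inference has happened yet,
# which is why printing `output` only shows <generator object ...>.
output = pipeline.predict("input.png")
print(output)

# The actual (slow) inference runs here, lazily, one result at a time.
for res in output:
    res.print()
```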
PaddleX/latest/practical_tutorials/high_performance_npu_tutorial
https://paddlepaddle.github.io/PaddleX/latest/practical_tutorials/high_performance_npu_tutorial.html