Description
❓ Question
I optimized a PyTorch module with Torch-TensorRT. How can I move the compiled engine to a Jetson?
What you have already tried
I tried `torch.jit.load('trt_traced_model.ts')`, but got a `torch.torch.classes.tensorrt.Engine` error.
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.10.0
- OS (e.g., Linux): ARM Ubuntu 18.04
- How you installed PyTorch: pip, from the official NVIDIA builds
- Python version: 3.6
- CUDA version: 10.2
- GPU models and configuration: Jetson NX
Additional context
I have a Jetson NX with JetPack 4.6, torch v1.10.0, and torchvision v0.11.0, where I want to deploy a TensorRT model.
On my main computer I installed this repository and converted my model to TensorRT successfully. Now I need to move the model onto the Jetson for production.
This is the code that I use to export to TensorRT (main computer):

```python
import torch
import torch_tensorrt

model.cuda().eval()
model = torch.jit.trace(model, [torch.rand(1, 3, 224, 224).cuda()])
trt_model_fp32 = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float32},  # run with FP32
)
torch.jit.save(trt_model_fp32, dir)  # dir is the output file path
```
This is on my Jetson:

```python
model = torch.jit.load(dir)
```

but I get a `torch.torch.classes.tensorrt.Engine` error.
Torch-TensorRT is not installed on the Jetson. How can I move the TensorRT model over? Do I also need to install this repo on the Jetson?
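From what I understand, the saved TorchScript module references the custom `torch.classes.tensorrt.Engine` class, which is only registered once the Torch-TensorRT runtime is loaded into the process. A minimal sketch of what I believe the load side needs, assuming a Jetson-compatible build of `torch_tensorrt` is installed there and `trt_traced_model.ts` is the file exported on the main computer:

```python
import torch
import torch_tensorrt  # noqa: F401 -- importing this registers torch.classes.tensorrt.Engine

# With the runtime registered, deserialization of the embedded engine should work
model = torch.jit.load("trt_traced_model.ts")
out = model(torch.rand(1, 3, 224, 224).cuda())
```

Note the engine would also have to be built against the same TensorRT version and a compatible compute capability as the Jetson, since serialized TensorRT engines are not portable across mismatched environments.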
Thanks!