
❓ [Question] What am I missing to install Torch-TensorRT v1.1.0 on a Jetson with JetPack 4.6? #1076

@mjack3

Description


❓ Question

I am getting errors while trying to install Torch-TensorRT v1.1.0 on a Jetson with JetPack 4.6, for use with Python 3.

What you have already tried

I followed the official installation of PyTorch v1.10.0 using the binaries from the official NVIDIA forum. Then I followed the official steps of this repository, which are:

  1. Install Bazel - completed successfully
  2. Build Natively on aarch64 (Jetson) - this is where I hit the problem

Environment

  • PyTorch version: 1.10.0
  • OS (e.g., Linux): Ubuntu 18.04
  • How you installed PyTorch: using pip3, following the NVIDIA forum
  • Python version: 3.6
  • CUDA version: 10.2
  • TensorRT version: 8.2.1.8
  • cuDNN version: 8.2.0.1
  • GPU models and configuration: Jetson NX with JetPack 4.6
  • Any other relevant information: the installation is clean; I am using the CUDA and TensorRT that come flashed with JetPack.

Additional context

Starting from a clean JetPack installation with Torch 1.10.0 installed from the official binaries, I describe the installation steps I followed for this repository, together with the errors I am getting.

1- Install Bazel

git clone -b v1.1.0 https://github.com/pytorch/TensorRT.git
sudo apt-get install openjdk-11-jdk
export BAZEL_VERSION=$(cat /home/tkh/TensorRT/.bazelversion)
mkdir bazel
cd bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
unzip bazel-$BAZEL_VERSION-dist.zip
bash ./compile.sh
cp output/bazel /usr/local/bin/
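A quick way to confirm that the bazel on PATH really matches the version the repository pins (a sketch I am adding, not part of the original steps; the parsing assumes the dist-build output format shown below):

```shell
# Sketch: compare the installed bazel against the repo's pinned .bazelversion.
# "bazel --version" for a dist build prints e.g. "bazel 5.1.1- (@non-git)".
installed=$(bazel --version | sed 's/^bazel //; s/[- ].*$//')
expected=$(cat TensorRT/.bazelversion)   # path assumes the clone from step 1
if [ "$installed" = "$expected" ]; then
    echo "bazel $installed matches .bazelversion"
else
    echo "version mismatch: installed=$installed expected=$expected" >&2
fi
```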

At this point, bazel --version reports bazel 5.1.1- (@non-git).

2- Build Natively on aarch64 (Jetson)

Then I modified the WORKSPACE file of this repository as follows:

workspace(name = "Torch-TensorRT")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

http_archive(
    name = "rules_python",
    sha256 = "778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f",
    url = "https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz",
)

load("@rules_python//python:pip.bzl", "pip_install")

http_archive(
    name = "rules_pkg",
    sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
        "https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
    ],
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

git_repository(
    name = "googletest",
    commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
    remote = "https://github.com/google/googletest",
    shallow_since = "1570114335 -0400",
)

# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
    name = "torch_tensorrt",
    path = "/opt/conda/lib/python3.8/site-packages/torch_tensorrt"
)

# CUDA should be installed on the system locally
new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda-10.2/",
)

new_local_repository(
    name = "cublas",
    build_file = "@//third_party/cublas:BUILD",
    path = "/usr",
)
#############################################################################################################
# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)
#############################################################################################################


####################################################################################
# Locally installed dependencies (use in cases of custom dependencies or aarch64)
####################################################################################

# NOTE: In the case you are using just the pre-cxx11-abi path or just the cxx11 abi path
# with your local libtorch, just point deps at the same path to satisfy bazel.

# NOTE: NVIDIA's aarch64 PyTorch (python) wheel file uses the CXX11 ABI unlike PyTorch's standard
# x86_64 python distribution. If using NVIDIA's version just point to the root of the package
# for both versions here and do not use --config=pre-cxx11-abi

new_local_repository(
    name = "libtorch",
    path = "/home/tkh-ad/.local/lib/python3.6/site-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
    path = "/home/tkh-ad/.local/lib/python3.6/site-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
    name = "cudnn",
    path = "/usr/local/cuda-10.2/",
    build_file = "@//third_party/cudnn/local:BUILD"
)

new_local_repository(
   name = "tensorrt",
   path = "/usr/",
   build_file = "@//third_party/tensorrt/local:BUILD"
)

# #########################################################################
# # Testing Dependencies (optional - comment out on aarch64)
# #########################################################################
#pip_install(
#    name = "torch_tensorrt_py_deps",
#    requirements = "//py:requirements.txt",
#)

#pip_install(
#    name = "py_test_deps",
#    requirements = "//tests/py:requirements.txt",
#)

#pip_install(
#    name = "pylinter_deps",
#    requirements = "//tools/linter:requirements.txt",
#)
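Before invoking the build, it can help to verify that every path the WORKSPACE points at actually exists on disk, since a wrong path only surfaces later as a confusing "input file(s) do not exist" error. A minimal sketch (the paths below are copied from the WORKSPACE above; the TensorRT header location is an assumption and should be adjusted per machine):

```python
import os

def missing_paths(paths):
    """Return the subset of {name: path} entries that do not exist on disk."""
    return {name: p for name, p in paths.items() if not os.path.exists(p)}

# Paths taken from the WORKSPACE above; the "tensorrt header" entry is a guess
# at a typical JetPack location and may differ on your system.
workspace_paths = {
    "libtorch": "/home/tkh-ad/.local/lib/python3.6/site-packages/torch",
    "cuda": "/usr/local/cuda-10.2/",
    "cudnn header": "/usr/local/cuda-10.2/include/cudnn.h",
    "tensorrt header": "/usr/include/aarch64-linux-gnu/NvInfer.h",
}

for name, path in missing_paths(workspace_paths).items():
    print(f"MISSING {name}: {path}")
```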

Finally, when I launch python3 py/setup.py install --use-cxx11-abi, the following error happens:

py/setup.py:52: UserWarning: Assuming jetpack version to be 4.6, if not use the --jetpack-version option
  warnings.warn("Assuming jetpack version to be 4.6, if not use the --jetpack-version option")
/usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
  warnings.warn(msg)
running install
Jetpack version: 4.6
building libtorchtrt
INFO: Build options --compilation_mode, --cxxopt, --define, and 1 more have changed, discarding analysis cache.
INFO: Analyzed target //:libtorchtrt (1 packages loaded, 3226 targets configured).
INFO: Found 1 target...
ERROR: /home/tkh-ad/.cache/bazel/_bazel_tkh-ad/9449ca3ce00d402b1ed43474fe8a8b7c/external/cudnn/BUILD.bazel:18:11: Middleman _middlemen/@cudnn_S_S_Ccudnn_Uheaders-BazelCppSemantics_build_arch_aarch64-opt failed: missing input file 'external/cudnn/include/cudnn.h', owner: '@cudnn//:include/cudnn.h'
ERROR: /home/tkh-ad/.cache/bazel/_bazel_tkh-ad/9449ca3ce00d402b1ed43474fe8a8b7c/external/cudnn/BUILD.bazel:18:11: Middleman _middlemen/@cudnn_S_S_Ccudnn_Uheaders-BazelCppSemantics_build_arch_aarch64-opt failed: 1 input file(s) do not exist
Target //:libtorchtrt failed to build
Use --verbose_failures to see the command lines of failed build steps.
ERROR: /home/tkh-ad/.cache/bazel/_bazel_tkh-ad/9449ca3ce00d402b1ed43474fe8a8b7c/external/cudnn/BUILD.bazel:18:11 Middleman _middlemen/@cudnn_S_S_Ccudnn_Uheaders-BazelCppSemantics_build_arch_aarch64-opt failed: 1 input file(s) do not exist
INFO: Elapsed time: 1.866s, Critical Path: 0.02s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
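The failure says the @cudnn external repository, which the WORKSPACE roots at /usr/local/cuda-10.2/, contains no include/cudnn.h. This is only my assumption, but on JetPack cuDNN is typically installed by the OS packages rather than inside the CUDA toolkit directory, with headers under /usr/include (or /usr/include/aarch64-linux-gnu). A diagnostic sketch to locate the header:

```shell
# Sketch (an assumption, not a confirmed fix): find where JetPack actually
# installed the cuDNN header; on Jetson it is often /usr/include/cudnn.h.
find /usr -maxdepth 3 -name 'cudnn*.h' 2>/dev/null || true
# If cudnn.h turns up under /usr/include, repointing the "cudnn" stanza in
# WORKSPACE at "/usr/" (mirroring the "tensorrt" stanza) may be worth trying.
```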

Note 1: I installed Torch-TensorRT v1.0.0 successfully a week ago; now I am stuck trying to install it on another Jetson.
Note 2: I am getting the same error when trying to install v1.0.0, so maybe the problem is with Bazel.
