# CLI Demo on Jetson Xavier NX
---
### Install boot image
- Note: Follow these instructions on your host machine.
- Download the boot image
- You can download it either from a web browser or with wget:
- Using wget:
- `wget https://developer.nvidia.com/jetson-nx-developer-kit-sd-card-image`
- `mv jetson-nx-developer-kit-sd-card-image JP502-xnx-sd-card-image-b231.zip`
- The image used in this guide is `JP502-xnx-sd-card-image-b231.zip`
- Follow https://developer.nvidia.com/embedded/learn/get-started-jetson-xavier-nx-devkit#write to write the image. The instructions below are based on that page.
- You can write the image using Etcher (a graphical program) or via the command line.
- Using Etcher:
- Download Etcher:
- Visit https://www.balena.io/etcher and download the .AppImage file (Ubuntu)
- Or, use wget:
- `wget https://github.com/balena-io/etcher/releases/download/v1.13.3/balenaEtcher-1.13.3-x64.AppImage`
- Make the file executable:
- Right-click the file and check `Allow executing file as program` on the `Permissions` tab
- Or, `chmod 775 balenaEtcher-1.13.3-x64.AppImage`
- Double-click the file to run it
- Write the image you downloaded earlier to an SD card (see https://developer.nvidia.com/embedded/learn/get-started-jetson-xavier-nx-devkit#write if needed)
- (Optional) Use the SD card as storage (see the next section)
- Eject the card
- Using the command line:
- Note: this method was not tested
- Check which device your SD card was assigned to: `dmesg | tail | awk '$3 == "sd" {print}'`
- For example, `/dev/sda`
- Write the boot image to the card: `/usr/bin/unzip -p ~/Downloads/JP502-xnx-sd-card-image-b231.zip | sudo /bin/dd of=/dev/sda bs=1M status=progress`
- (Optional) Use the SD card as storage (see the next section)
- Eject the card: `sudo eject /dev/sda`
### Optional: Use the remaining SD card space as storage
- You can use your boot SD card as storage without an additional M.2 SSD
- Note: neither method has been verified; please report any errors
- Both a graphical method (GParted) and a CLI method are available:
- GParted:
- Run GParted
- Select SD card
- Right-click unallocated part, click `New` to create a new partition
- Click `Apply All Operations` in the `Edit` menu to apply the changes
- CLI:
- Run parted: `sudo parted /dev/sda`
- Type `p` to check disk partitions
- If you get a warning like `Warning: Not all of the space available to /dev/sdX appears to be used, you can fix the GPT to use all of the space (an extra 237504512) or continue with the current setting? Fix/Ignore?`, type `Fix` and then type `p` again
- Find the partition whose name is `APP` and whose file system is `ext4`, and note its number (for example, `1`)
- Resize the partition: `resizepart 1 -1`
- Exit parted: `quit`
- Resize filesystem: `sudo resize2fs /dev/sda1`
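- Non-interactively (untested), the same resize should be possible with `sudo parted /dev/sda resizepart 1 100%` followed by `sudo resize2fs /dev/sda1`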
### Set up Jetson
- Note: Follow these instructions on the Jetson device.
- Insert the SD card into the Jetson device and power it on
- (Optional) Install an SSH server for remote installation & testing
- `sudo apt update`
- `sudo apt install openssh-server -y`
- `echo 'PermitRootLogin yes' | sudo tee -a /etc/ssh/sshd_config`
- Set the root password: `sudo passwd root`
- `sudo service ssh restart`
- Check the device's IP address (e.g. with `ip addr`) and connect from your host machine (e.g. `ssh root@<JETSON_IP>`)
- (Optional) Install tmux for convenience
- `sudo apt update`
- `sudo apt install tmux -y`
- `tmux new`
- Create a Docker container:
- `docker run -it --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v [YOUR_MOUNT_OPTION] -v /usr/bin/tegrastats:/usr/bin/tegrastats --name [YOUR_CONTAINER_NAME] nvcr.io/nvidia/l4t-pytorch:r34.1.1-pth1.11-py3`
- Note: set your own mount option and container name; a filled-in example is shown below
- Mounting `tegrastats` lets you monitor resources from inside the container, since `nvidia-smi` is unavailable on Jetson
- After the container is created, you'll be dropped inside it automatically
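- For example (with hypothetical values; `~/work` and `trt-demo` are placeholders chosen here): `docker run -it --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v ~/work:/work -v /usr/bin/tegrastats:/usr/bin/tegrastats --name trt-demo nvcr.io/nvidia/l4t-pytorch:r34.1.1-pth1.11-py3`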
### Set up the container for the demo
- Install packages
- `apt update`
- `apt install openjdk-11-jdk curl zip unzip nano git -y`
- `pip3 install cuda-python`
- Check that PyTorch's CUDA support works and the other packages are importable:
- `python3`
- `import torch; import tensorrt`
- `torch.cuda.is_available(); conv = torch.nn.Conv2d(3, 3, 3).cuda()`
- `conv(torch.randn(2, 3, 4, 4).cuda())`
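- The same checks as a single script (a sketch; `check_env.py` is a file name chosen here for illustration):
```
# check_env.py - verify that PyTorch sees the GPU and TensorRT imports cleanly
import torch
import tensorrt

print("torch:", torch.__version__, "| tensorrt:", tensorrt.__version__)
print("CUDA available:", torch.cuda.is_available())
print("device:", torch.cuda.get_device_name(0))

# run a small convolution on the GPU to confirm kernels actually execute
conv = torch.nn.Conv2d(3, 3, 3).cuda()
out = conv(torch.randn(2, 3, 4, 4).cuda())
print("conv output shape:", tuple(out.shape))
```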
### Install Torch-TensorRT
- Note: Since this repository uses the torch -> ONNX -> TensorRT path (sketched below), Torch-TensorRT is not a required package. However, we record how to install it.
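- For reference, a minimal sketch of the torch -> ONNX -> TensorRT flow (illustrative only; the toy model, file names, and settings are placeholders, and the actual demo code may differ):
```
import torch
import tensorrt as trt

# 1. Export a toy PyTorch model to ONNX
model = torch.nn.Conv2d(3, 8, 3).eval()
dummy = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy, "toy.onnx", input_names=["input"], output_names=["output"])

# 2. Parse the ONNX file and build a serialized TensorRT engine
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("toy.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)
config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
with open("toy.engine", "wb") as f:
    f.write(engine_bytes)
```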
- Clone the Torch-TensorRT GitHub repository:
- Make sure you're in the root directory (`/`)
- `git clone -b v1.1.0 https://github.com/pytorch/TensorRT.git`
- Install Bazel
- `export BAZEL_VERSION=$(cat TensorRT/.bazelversion)`
- `mkdir bazel; cd bazel`
- `curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip`
- `unzip bazel-$BAZEL_VERSION-dist.zip`
- `bash compile.sh`
- This step takes some time
- `cp output/bazel /usr/local/bin/`
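- Verify the installation: `bazel --version`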
- Install Torch-TensorRT
- `cd ../TensorRT`
- Edit `WORKSPACE` file (refer to https://pytorch.org/TensorRT/tutorials/installation.html):
- See https://github.com/pytorch/TensorRT/blob/main/WORKSPACE for the original file
```
workspace(name = "Torch-TensorRT")

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_pkg",
    sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
        "https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
    ],
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

git_repository(
    name = "googletest",
    commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
    remote = "https://github.com/google/googletest",
    shallow_since = "1570114335 -0400",
)

# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
    name = "torch_tensorrt",
    path = "/opt/conda/lib/python3.8/site-packages/torch_tensorrt",
)

# CUDA should be installed on the system locally
new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda/",
)

new_local_repository(
    name = "cublas",
    build_file = "@//third_party/cublas:BUILD",
    path = "/usr",
)

new_local_repository(
    name = "libtorch",
    path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
    path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
    name = "cudnn",
    path = "/usr/",
    build_file = "@//third_party/cudnn/local:BUILD",
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD",
)
```
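- Before building, check that the paths above match your container, e.g. `python3 -c "import torch; print(torch.__path__)"` for `libtorch` and `ls /usr/local/cuda` for `cuda`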
- Compile C++ files: `bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6`
- This step takes some time
- Set up the Python package:
- You must restrict the number of parallel build jobs, otherwise the build is likely to exhaust the Jetson's memory
- `cd py; nano setup.py`
- Find the definition of the function `build_libtorchtrt_pre_cxx11_abi` (the third function from the top)
- Insert `cmd.append("--jobs=1")` after `cmd.append("//:libtorchtrt")`, as sketched below
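- After the edit, the relevant lines should look roughly like this (paraphrased; the exact surrounding code in `py/setup.py` may differ):
```
cmd.append("//:libtorchtrt")
cmd.append("--jobs=1")  # limit Bazel to one job so the build doesn't run out of memory
```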
- You can also limit jobs at the terminal level; check the environment variable shown in the output printed when you run `python3 setup.py`
- `python3 setup.py install --use-cxx11-abi --jetpack-version 4.6`
- This step takes some time
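- Once installed, you can smoke-test Torch-TensorRT (a sketch; the toy model and input shape are arbitrary):
```
import torch
import torch_tensorrt

# compile a toy model through Torch-TensorRT and run one inference
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval().cuda()
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 32, 32))],
    enabled_precisions={torch.float},
)
print(trt_model(torch.randn(1, 3, 32, 32).cuda()).shape)
```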