Basic Algorithm Environment Configuration for Nvidia Jetson


After installing the system on the device, libraries such as CUDA, cuDNN, OpenCV, and TensorRT are configured through JetPack.

1. Install Conda

Miniconda download link: https://repo.anaconda.com/miniconda/, choose the appropriate version to download, for example `Miniconda3-py38_23.11.0-2-Linux-aarch64.sh`.

1.1 Installation command
cd /root
wget -q https://repo.anaconda.com/miniconda/Miniconda3-py38_23.11.0-2-Linux-aarch64.sh
bash ./Miniconda3-py38_23.11.0-2-Linux-aarch64.sh -b -f -p /root/miniconda3
rm -f ./Miniconda3-py38_23.11.0-2-Linux-aarch64.sh

Parameter explanation:

-b: batch mode, runs non-interactively, using default answers for all questions.
-f: force mode, forces installation even if the target directory already exists.
-p /root/miniconda3: specifies the installation path as `/root/miniconda3`.
1.2 Configure environment variables
echo 'export PATH=/root/miniconda3/bin:/usr/local/bin:$PATH' >> /etc/profile
echo "source /etc/profile" >> /root/.bashrc
# Initialize miniconda
conda init

Parameter explanation:

# Add the Miniconda executable directory to the system PATH environment variable
/root/miniconda3/bin: Miniconda's binary directory;
/usr/local/bin: system's local binary directory;
$PATH: retains the existing PATH environment variable;
>>: appends to the end of the file;
/etc/profile: system-level environment variable configuration file, effective for all users.

# Make environment variables effective upon user login
source /etc/profile: loads the environment variable settings from the /etc/profile file;
>>: appends to the end of the file;
/root/.bashrc: bash configuration file for the root user, executed automatically at each login.
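Because the shell searches PATH from left to right, prepending `/root/miniconda3/bin` makes Miniconda's tools shadow any system copies of the same name. The following pure-Python sketch (directory contents are hypothetical; the real lookup is done by the shell) illustrates that ordering:

```python
# Illustration of why the order of directories in PATH matters: the
# shell walks PATH left to right and runs the first match it finds.
def resolve(cmd, path, available):
    """Return the first PATH directory that provides cmd, or None."""
    for directory in path.split(":"):
        if cmd in available.get(directory, ()):
            return directory + "/" + cmd
    return None

# Hypothetical contents of two directories:
available = {
    "/root/miniconda3/bin": {"python", "conda", "pip"},
    "/usr/bin": {"python"},
}

# Because Miniconda's bin comes first, its python shadows the system one:
print(resolve("python", "/root/miniconda3/bin:/usr/bin", available))
# → /root/miniconda3/bin/python
```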
1.3 Create a new environment with Python 3.8 on top of the base environment, then activate it:
conda create -n pytorch_gpu python=3.8
conda activate pytorch_gpu
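Once the environment is activated, a quick check of the interpreter version confirms it is actually in use (inside `pytorch_gpu` this should report Python 3.8):

```python
# Quick sanity check: print the interpreter version of the active env.
import sys
print("python %d.%d" % sys.version_info[:2])
```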

2. Install GPU version of PyTorch

Download link for the Jetson-specific PyTorch installation package: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048

2.1 Install toolchain
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev libomp-dev
pip install Cython==0.29.21
pip install numpy
2.2 Install torch
pip install /path/to/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
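The wheel's filename encodes its compatibility tags, so it is worth checking them against your environment before installing. A small sketch of how the name breaks down (the filename follows the PEP 427 wheel naming convention):

```python
# Wheel filenames encode compatibility tags (PEP 427):
#   {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
# cp38 must match the Python 3.8 interpreter of the conda env, and
# linux_aarch64 matches the Jetson's ARM64 architecture.
wheel = "torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl"
name, version, py_tag, abi_tag, platform_tag = wheel[:-4].split("-")
print(py_tag, abi_tag, platform_tag)  # → cp38 cp38 linux_aarch64
```

If the tags do not match your interpreter or architecture, pip refuses to install the wheel.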

3. Install torchvision

The GPU version of torchvision on Jetson needs to be compiled and installed manually. The version correspondence between PyTorch and torchvision is as follows:

PyTorch v1.8 - torchvision v0.9.0
PyTorch v1.9 - torchvision v0.10.0
PyTorch v1.10 - torchvision v0.11.1
PyTorch v1.11 - torchvision v0.12.0
PyTorch v1.12 - torchvision v0.13.0
PyTorch v1.13 - torchvision v0.14.0
PyTorch v1.14 - torchvision v0.14.1
PyTorch v2.0 - torchvision v0.15.1
PyTorch v2.1 - torchvision v0.16.1
PyTorch v2.2 - torchvision v0.17.1
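A small helper can automate the lookup from this table. The sketch below covers only a subset of the rows and matches on the `major.minor` prefix of the installed torch version (NVIDIA's Jetson wheels carry suffixes such as `a0+41361538.nv23.06`):

```python
# Subset of the PyTorch → torchvision compatibility table above.
TORCH_TO_TORCHVISION = {
    "2.0": "0.15.1",
    "2.1": "0.16.1",
    "2.2": "0.17.1",
}

def torchvision_for(torch_version):
    """Look up the torchvision release for a torch version string.

    Matches on the major.minor prefix, ignoring local build suffixes.
    """
    major_minor = ".".join(torch_version.split("+")[0].split(".")[:2])
    return TORCH_TO_TORCHVISION.get(major_minor)

print(torchvision_for("2.1.0a0+41361538.nv23.06"))  # → 0.16.1
```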
3.1 Install toolchain
sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev
3.2 Install torchvision
git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.16.1  # v0.16.1 is the version of torchvision
python3 setup.py install --user

The compilation and installation process is slow and may encounter errors, which need to be analyzed on a case-by-case basis.

4. Verify installation results

Run the test script to verify the installation results.

import torch
print('CUDA available: ' + str(torch.cuda.is_available()))
print('cuDNN version: ' + str(torch.backends.cudnn.version()))
a = torch.tensor([0., 0.], dtype=torch.float32, device='cuda')
print('Tensor a =', a)
b = torch.randn(2, device='cuda')
print('Tensor b =', b)
c = a + b
print('Tensor c =', c)

import torchvision
print(torchvision.__version__)

The expected output is as follows:

CUDA available: True
cuDNN version: 8600
Tensor a = tensor([0., 0.], device='cuda:0')
Tensor b = tensor([ 0.4206, -1.0542], device='cuda:0')
Tensor c = tensor([ 0.4206, -1.0542], device='cuda:0')
0.16.1

If the project requires the GPU version of onnxruntime, go to Jetson Zoo: https://elinux.org/Jetson_Zoo#PyTorch_.28Caffe2.29/ and download the onnxruntime-gpu installation package matching your JetPack version.

Download the onnxruntime 1.11.0 wheel built for Python 3.8, then run the installation command:

pip install /path/to/onnxruntime_gpu-1.11.0-cp38-cp38-linux_aarch64.whl
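After installing, a quick check confirms that the GPU build is in place by listing the execution providers it was compiled with (the snippet is written defensively so it also reports when the package is missing):

```python
# Hedged check: report whether onnxruntime is importable and, if so,
# which execution providers it was built with. On a working GPU build
# the list should include a CUDA provider.
try:
    import onnxruntime as ort
    msg = "providers: %s" % ort.get_available_providers()
except ImportError:
    msg = "onnxruntime is not installed"
print(msg)
```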

Thus, the basic environment configuration is complete.
