Hello everyone! The Jetson Nano is a compact AI computing board from NVIDIA. Its biggest highlight is a GPU that is relatively high-end for the embedded field, and it provides application programming interfaces (APIs) for AI and computer vision that can be used directly in low-power AI scenarios.
Since its release, the Jetson Nano has been very popular in the open-source hardware community. There are, of course, already plenty of Weibo articles about the Jetson Nano online, but readers have little appetite for marketing pieces; too many exaggerated words become tiresome. Recently, NVIDIA’s official distributor Seeed Studio asked the Chipboard Workshop editor to conduct an objective review, which led to this article.
As a service platform, the Chipboard Workshop’s reviews tend to be more realistic and objective, and the evaluation of various products is fairer, which is also the reason why Seeed Studio chose the Chipboard Workshop.
This review aims not only to objectively showcase the performance characteristics of the Jetson Nano but also to help everyone eliminate various usage barriers from an introductory perspective.
The key specifications of the Jetson Nano are as follows:

| Item | Specification |
| --- | --- |
| GPU | NVIDIA Maxwell™ architecture, 128 NVIDIA CUDA® cores |
| CPU | Quad-core ARM® Cortex®-A57 processor |
| Memory | 4 GB 64-bit LPDDR4 |
| Storage | Micro SD card slot (32 GB or larger card recommended) |
| Video Encoding | 1x 4K@30, or 4x 1080p@30, or 2x 1080p@60 |
| Video Decoding | 1x 4K@60, or 2x 4K@30, or 8x 1080p@30 |
| Camera | 2x MIPI CSI-2, or USB camera |
| Network | Gigabit Ethernet |
| Display | HDMI 2.0 or DP 1.2 |
| High-speed Interfaces | 1x PCIe, 4x USB 3.0 |
| I/O | UART, 2x SPI, 3x I2C, I2S, GPIOs |
The Jetson Nano development kit consists of a core board and a carrier board, which are connected via a SO-DIMM interface commonly found in laptop memory.
The package contains a Jetson Nano module with a heatsink, a carrier board (the module and carrier board come pre-assembled), and a quick start guide. The included heatsink is rated for roughly 10 W of thermal dissipation; for higher performance modes, add a fan. The board has a 4-pin fan header that supports PWM speed control.
The kit comes in two versions, A02 and B01; these three characters appear at the end of the part number. We received the B01 version. The kit’s part number can be found on the packaging, and the carrier board’s part number is printed on the back of the carrier board.
The Jetson Nano development kit I received is from Seeed Studio and is the regular version sold in China (made in China), with matching serial numbers on the back of the carrier board and on the packaging. The sample tested here is identical to the products you would buy through normal channels; it is not an engineering sample or a specially prepared review unit.
First of all, the kit does not include a display, TF card, or power supply; these need to be prepared or purchased by the user. As the reviewer, I was additionally provided with a TF card and a power supply by Seeed Studio for testing convenience.
The TF card needs to be at least 16 GB with UHS-I speed or better. I find 16 GB somewhat tight for storing images and video material in AI applications, so I recommend a SanDisk TF card of 32 GB or more (64 GB, for example) that meets the technical specifications. Power can be supplied through the DC barrel jack or the Micro-USB port. I recommend the DC jack: it is stable, has low contact resistance, and is cost-effective. A 5V 2A DC supply is easy to find on the market, and even a higher-spec 5V 4A unit is not expensive.
Micro-USB is more convenient and can use a phone charger rated 5V 2A or higher, which many Android phone chargers from a few years ago can satisfy. Just make sure to use a high-current Micro-USB cable. I tested a 1 A cable provided by Firefly for Arduino: it can start the Jetson Nano but triggers a low-input-voltage warning, so a high-current cable is still recommended.
Additionally, when using USB power, consider that there may be voltage drop under high current, so it is recommended to use a reputable brand’s genuine charger, and the actual voltage must exceed 4.75V when outputting 2A. If you are concerned about voltage drop resulting in low voltage, you can use a DELL or Lenovo 5.25V tablet charger. Alternatively, you can directly purchase the 5V3A power adapter provided by Seeed Studio.
There is a power-source selection jumper on the board, labeled J48. When powering via Micro-USB, leave the J48 jumper off (Micro-USB supplies the board; the DC input is inactive). When powering via the DC jack, fit the jumper (the DC input supplies the board; Micro-USB power is disabled). The J48 jumper in the photo is in the disconnected state; pay attention to this detail!
As a side note, I have received many reports of problems during use. Some users reported that the board would not light up on arrival even though they were using a genuine phone charger, which should rule out insufficient power. After asking about their setup in detail, it turned out to be a false alarm: the jumper was set incorrectly. I want to stress again that everyone should carefully read the user manual on NVIDIA’s Jetson official website (NV_Jetson_Nano_Developer_Kit_User_Guide.pdf), and if the English is hard going, you can ask Seeed Studio’s online technical support.
Some sellers do not understand the product and provide no technical support: tell them the board does not light up and they will simply have you ship it back; two months later, after testing, it is found to be normal and returned, wasting your shipping costs and time. Only reliable sellers, with their rich project experience, can help you avoid these unnecessary setbacks.
The Jetson Nano supports HDMI and DP for display output. HDMI displays are quite common on the market; if your display only supports VGA, you will need to consider buying a new one. If you use a TV as a display, most mainstream brand LCD TVs now support HDMI.
The camera can be the LI-IMX219-MIPI-FF-NANO module recommended for the Jetson Nano; the board also supports the third-party Raspberry Pi Camera Module V2, so if you already have one you can use it directly. The camera connects to the CAM0 and CAM1 interfaces shown in the image below.
The steps for first-time startup are quite simple: 1. Write the original factory image to the TF card; 2. Insert the TF card into the Jetson Nano; 3. Connect the mouse, keyboard, display, and network cable; 4. Power the board through the Micro-USB interface; then it will automatically start.
The image can be written with balenaEtcher or Win32 Disk Imager.
Another side note: I have also received feedback from users saying that their SanDisk TF card is “not recognized.” In such cases the card is very likely fine; the problem is more accurately described as “the card is not mounted,” not “the card is not recognized.” On Windows, open Disk Management and you should see the TF card with partitions on it. The reason it cannot be mounted is simple: Windows does not understand the Linux ext4 partition, while the card itself is normal. Again, if you run into problems, ask the Chipboard Workshop editor first rather than blindly shipping the board back to the manufacturer, which wastes shipping costs if the product turns out to be fine after inspection.
For the wiring, refer to the animated image at the link below:
https://developer.nvidia.com/sites/default/files/akamai/embedded/images/jetsonNano/gettingStarted/Jetbot_animation_500x282_2.gif
The first startup will ask you to enter initial configuration information, mainly keyboard layout, username, and password, which is similar to the usage of a normal desktop Ubuntu system.
Considering that most phones have moved to USB Type-C, if you really have no Micro-USB power cable on hand you can power the board through the DC jack instead. The jack is 5.5 mm outer diameter, 2.1 mm inner diameter; you can use an original DC supply or a compatible third-party supply with a 2.1 mm plug.
There is a 40-pin header on the board that includes two 3.3 V output pins and two 5 V pins. The 5 V pins can act as 5 V outputs or as 5 V power inputs for the board. If you have female DuPont wires, you can DIY a supply line through this header; each 5 V pin accepts up to 2.5 A, so using both gives a 5V 5A high-power input, which is an advantage under heavy load. I recommend this route only for experienced users (newcomers should not DIY the power supply, to avoid damage from wrong voltage settings or miswiring). The Chipboard Workshop’s Taobao assistant has prepared compatible third-party 5V 4A or higher DC supplies for beginners, reliable in quality and low in price.
On Linux, the most commonly used remote command-line service is OpenSSH, and the official Jetson Nano system already has the SSH service enabled; you can log in over the LAN with PuTTY. If you want to copy files to the Jetson Nano or edit files, you can use WinSCP. If your upstream router runs a DHCP service and the Jetson Nano’s Ethernet port is connected to a LAN port on the router, the router will assign the Jetson an IP address automatically.
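If you are not sure which address was assigned, you can check on the Jetson itself (with keyboard and display attached) before switching to remote access; eth0 is the usual interface name on the Jetson Nano, but verify it on your unit:
ip addr show eth0
Then, from another machine on the LAN, log in with PuTTY or any SSH client, e.g. ssh yourname@192.168.1.100 (a hypothetical address; substitute the one your router actually assigned).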
By default, only ordinary users can log in; you can set the root password after logging in. The command is:
sudo passwd
During password input, nothing is displayed; this is normal to prevent password snooping.
The SSH service configuration file is located at /etc/ssh/sshd_config
This file belongs to the root user. If an ordinary user wants to modify it, first grant read and write permission (sudo is required):
sudo chmod 666 /etc/ssh/sshd_config
Next, for example, to allow the root user to log in remotely via SSH, change the value of PermitRootLogin in /etc/ssh/sshd_config to yes.
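After the edit, the relevant line in /etc/ssh/sshd_config should read:
PermitRootLogin yes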
To apply the settings, you need to restart the SSH service with the command:
sudo service ssh restart
Then you can try logging in with the root user, and it should work fine.
There are many options for remote desktop software, such as TigerVNC, vnc4server, and x11vnc. If you want to view graphical applications remotely, x11vnc is recommended: it transmits the local desktop image directly and can display OpenGL applications properly. The commands to install and start the VNC service are as follows:
sudo apt-get install x11vnc
x11vnc -storepasswd
x11vnc
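By default x11vnc exits when the client disconnects and serves whichever display it finds. A typical invocation, attaching to the local display :0, using the password saved by -storepasswd above, and staying alive across disconnects, is:
x11vnc -display :0 -usepw -forever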
Properly setting up Swap (virtual memory) can improve system performance. The memory occupied by inactive processes can be placed into virtual memory, freeing up more physical memory space. Virtual memory can be provided by configuring a Swap partition or Swap file. Among them, the Swap file scheme does not require modifying the partition during configuration, and its performance is similar to that of a Swap partition, making it more flexible in practical applications.
For example, to create 2 GB of virtual memory, first create a 2 GB file in a convenient directory:
sudo dd if=/dev/zero of=/mnt/2GB.swap bs=1024 count=2097152
Format this file
sudo mkswap /mnt/2GB.swap
Then add the formatted file to the system as a Swap file
sudo swapon /mnt/2GB.swap
The default file permission is 644, which triggers a warning: other users’ programs could read or write the swap file, causing errors. The issue is minor, but if you are concerned you can restrict the file to 600 with chmod:
sudo chmod 600 /mnt/2GB.swap
The screenshots of the above operations are as follows:
The virtual-memory settings above do not persist across reboots. As we know, the /etc/fstab file controls how file systems are mounted; if you want this swap file activated automatically, add the following line to fstab:
/mnt/2GB.swap none swap sw 0 0
The strategy configuration file for using swap space is located at /etc/sysctl.conf
If you want the system to use physical memory as much as possible and minimize the use of virtual memory, add the line vm.swappiness=10 to the above file. If you want the system to use virtual memory as much as possible, set vm.swappiness=100. If you only want to use swap space when physical memory is exhausted, set vm.swappiness=0.
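Note that edits to /etc/sysctl.conf take effect at the next boot. To apply a value immediately, you can set it with sysctl and then verify it, for example:
sudo sysctl vm.swappiness=10
cat /proc/sys/vm/swappiness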
The command to check the current swap space of the system is
sudo swapon -s
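free also shows the memory and swap totals at a glance:
free -h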
If you are using the official system image, the basic development environment JetPack is already integrated. The official image is based on Ubuntu Bionic, i.e. Ubuntu 18.04 LTS, whose package-management command is apt. The arm64 and armhf architectures use the Ubuntu Ports repositories, and fast domestic mirrors of ubuntu-ports are available, so I recommend replacing the default overseas Ubuntu source in /etc/apt/sources.list with the USTC mirror.
Key point: before changing the source address, remember to install apt-transport-https from the default source first; otherwise you hit a chicken-and-egg problem (you switch to an https source and then find you cannot download anything). Domestic broadband operators widely use cache hijacking to improve network performance, and this technology is a double-edged sword: when updating software over plain HTTP you may be served stale files from a cache server, leading to serious errors in package dependencies. I strongly recommend HTTPS to avoid cache hijacking.
sudo apt-get update
sudo apt-get install apt-transport-https
Then edit the /etc/apt/sources.list file and change the software source address to the USTC Ubuntu Ports source address https://mirrors.ustc.edu.cn/ubuntu-ports/
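For reference, on the Ubuntu 18.04 (bionic) image the edited /etc/apt/sources.list would look roughly like this; it is a sketch, so adjust the suite names if your image differs:
deb https://mirrors.ustc.edu.cn/ubuntu-ports/ bionic main restricted universe multiverse
deb https://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb https://mirrors.ustc.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
Run sudo apt-get update afterwards to refresh the package lists.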
The details of the apt installation source settings are marked in red in the following image:
The official image integrates the JetPack development environment, which already includes libraries such as TensorRT, cuDNN, CUDA, multimedia API libraries, OpenCV, etc. The paths for their examples are in the /usr/src directory. The JetPack development environment also integrates some necessary development and debugging tools, such as Nsight Eclipse Edition for GPU-accelerated applications and CUDA-GDB for application debugging; as well as performance analysis tools such as Nsight Systems, nvprof, and Visual Profiler.
5.2 Installing pip3 and TensorFlow (optional)
As mentioned earlier, JetPack already contains the basic development environment and libraries such as TensorRT.
Python3 is already installed, and pip can be easily installed.
sudo apt-get install python3-pip python3-dev
JetPack does not include common software packages like hdf5, which can be installed using the command below (these are all dependencies for TensorFlow)
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
When using pip install to install software, if the speed is slow, you can add the -i option to use the domestic Aliyun pip source address, which can speed things up. For example, to install scrapy:
python3 -m pip install scrapy -i https://mirrors.aliyun.com/pypi/simple/
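If you do not want to pass -i every time, one common approach is to make the mirror the default in ~/.pip/pip.conf (create the file if it does not exist):
[global]
index-url = https://mirrors.aliyun.com/pypi/simple/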
Upgrade pip3:
sudo pip3 install -U pip testresources setuptools
Check the pip version:
pip3 -V
If a pip3 install command keeps failing while reporting “Requirement already satisfied”, you need to specify the installation path explicitly, meaning the path shown in the pip version output.
For our Jetson Nano, you can add the specified installation path parameter to the pip3 install command:
--target=/usr/local/lib/python3.6/dist-packages
For example, to install numpy, you can use the command:
pip3 install numpy --target=/usr/lib/python3/dist-packages
Installing TensorFlow on Jetson Nano also requires a series of Python dependencies, which can be installed together with one command (if installation fails, break it down and install one by one)
pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11
If it prompts that the installation of cryptography failed, you can use the apt method to install it
sudo apt-get install python3-cryptography
Finally, go to the NVIDIA website to download the tensorflow_gpu-1.14.0+nv19.10-cp36-cp36m-linux_aarch64.whl file, then install it with the command:
pip3 install tensorflow_gpu-1.14.0+nv19.10-cp36-cp36m-linux_aarch64.whl
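After installation, a quick sanity check confirms that TensorFlow imports and can see the GPU (tf.test.is_gpu_available applies to the 1.x release installed here):
python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"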
I found the TensorFlow installation process quite fiddly. By comparison, the jetson-inference library covered later is easier to install.
For specific tutorials on installing TensorFlow, see:
https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html
NVIDIA provides some convenient demos to help everyone get started. I spent about two hours on a preliminary test, trying several demos from the Hello AI World project (the GitHub address is given later). These are mainly inference workloads, so they run quite fast. The demos include image classification (ImageNet), object detection (DetectNet), image semantic segmentation (SegNet), and live camera processing examples.
6.1 Downloading and Compiling the Jetson-Inference Library
First, install the build tools from the Ubuntu repositories, along with git, which is needed to fetch the source from GitHub. The examples are exposed as Python extension modules: the C++ project uses the Python C API to provide bindings to the native C++ code. Ubuntu only pre-installs libpython-dev and python-numpy for Python 2.7; the Python 3.6 counterparts, libpython3-dev and python3-numpy, are essential for building the bindings, so they need to be installed.
The installation command is as follows:
sudo apt-get install git cmake libpython3-dev python3-numpy
Use the git command to download the source code locally, and you can use the recursive option to download submodules together
git clone --recursive https://github.com/dusty-nv/jetson-inference
If this step fails, it is usually because access to GitHub is slow. A domestic alternative to GitHub is Gitee: it is recommended to fork the project to your own repository, import it into Gitee, and then clone from Gitee. After cloning, run git submodule update --init
If it still fails, your connection to GitHub is simply too slow. Note that forking does not copy the contents of submodule folders: you will have to open each folder marked with @, find its path on GitHub, and import those projects into Gitee one by one.
Next, configure with CMake. Be sure to execute this step with sudo permissions, as the script downloads the pre-trained neural network files from the internet. The files are large, so make sure your network is stable and your TF card has enough free space.
cd jetson-inference
mkdir build
cd build
cmake ../
Installing PyTorch is optional; install it if you need to retrain networks. The PyTorch installation prompt runs automatically during the CMake step; if you answered it incorrectly, you can run the script manually to fix things:
cd jetson-inference/build
./install-pytorch.sh
The compilation itself is the usual flow: make, then make install. The make step can take the -j4 option, and make install requires sudo.
cd jetson-inference/build
make
sudo make install
sudo ldconfig
After installation, you can test it by starting python3 and importing the modules:
import jetson.inference
import jetson.utils
The test screenshot is as follows:
If Python does not report any missing modules, the installation is fine. Type exit() and press Enter to leave Python.
Next, you can run some tests to check the functionality.
6.2 Image Recognition Test
The Python example for image recognition can be found in python/examples/my-recognition.py
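For reference, the core of this example is only a dozen lines. The sketch below follows the Hello AI World tutorial of this period; exact utility names (for instance loadImage versus the older loadImageRGBA) vary slightly between jetson-inference releases, so treat it as illustrative rather than authoritative:

#!/usr/bin/python3
import argparse
import jetson.inference
import jetson.utils

# parse the image filename and an optional network name
parser = argparse.ArgumentParser()
parser.add_argument("filename", type=str, help="image file to classify")
parser.add_argument("--network", type=str, default="googlenet", help="model to load")
args = parser.parse_args()

img = jetson.utils.loadImage(args.filename)    # load the image into GPU memory
net = jetson.inference.imageNet(args.network)  # load the recognition network
class_idx, confidence = net.Classify(img)      # classify the image
class_desc = net.GetClassDesc(class_idx)       # look up the human-readable label
print("image is recognized as '{:s}' (class #{:d}) with {:f}% confidence".format(
    class_desc, class_idx, confidence * 100))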
The test images are downloaded to the data/images directory during compilation and installation. You can enter the corresponding directory to find them. If the download was not completed due to network issues during installation, you can download them later using the wget command. In this test, we will use several images of bears.
wget https://github.com/dusty-nv/jetson-inference/raw/master/data/images/black_bear.jpg
wget https://github.com/dusty-nv/jetson-inference/raw/master/data/images/brown_bear.jpg
wget https://github.com/dusty-nv/jetson-inference/raw/master/data/images/polar_bear.jpg
To run the demo:
./my-recognition.py polar_bear.jpg
./my-recognition.py brown_bear.jpg
./my-recognition.py black_bear.jpg
The image of the polar bear used for testing is shown below:
The screenshot of the running example with the polar bear image shows that the image recognition works correctly, identifying the polar bear.
This example uses the GoogleNet network by default. If you want a different network, add the --network parameter to the command:
./my-recognition.py --network=resnet-18 polar_bear.jpg
The test screenshot below shows the correct identification of the polar bear:
NVIDIA provides several pre-trained networks that can be used, which are automatically downloaded during the cmake step in the previous chapter.
If the model files did not download successfully due to network issues, you can download them from the following address:
https://github.com/dusty-nv/jetson-inference/releases
The camera image recognition example can be found in imagenet-camera.py
You can specify which camera to use with --camera. By default it uses MIPI CSI camera 0, i.e. --camera=0. For a USB camera, pass --camera=/dev/video0 (or /dev/video1, depending on the device name the system assigns). The commands to check which resolutions a USB camera supports are:
sudo apt-get install v4l-utils
v4l2-ctl --list-formats-ext
If you want to specify the image resolution, add the --width and --height options.
Example commands (run as an administrator):
./imagenet-camera.py                            # GoogleNet, default MIPI CSI camera (1280×720)
./imagenet-camera.py --network=resnet-18        # ResNet-18, default MIPI CSI camera (1280×720)
./imagenet-camera.py --camera=/dev/video0       # GoogleNet, V4L2 camera /dev/video0 (1280×720)
./imagenet-camera.py --width=640 --height=480   # GoogleNet, default MIPI CSI camera (640×480)
6.3 Object Detection Test
Next, we will test a camera object detection program. The test code can be found in python/examples/my-detection.py
This code defaults to a USB camera at 1280×720 resolution, and the default network is SSD-Mobilenet-v2.
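The body of my-detection.py is short; here is a sketch following the tutorial code of this period (newer jetson-inference releases replace gstCamera/glDisplay with videoSource/videoOutput, so check your version):

import jetson.inference
import jetson.utils

# load the detection network and open the USB camera and display window
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()    # grab a frame
    detections = net.Detect(img, width, height)  # run detection and overlay boxes
    display.RenderOnce(img, width, height)       # show the annotated frame
    display.SetTitle("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))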
There are several pre-trained networks available to use:
| Model | CLI argument | NetworkType enum | Object classes |
| --- | --- | --- | --- |
| SSD-Mobilenet-v1 | ssd-mobilenet-v1 | SSD_MOBILENET_V1 | 91 (COCO classes) |
| SSD-Mobilenet-v2 | ssd-mobilenet-v2 | SSD_MOBILENET_V2 | 91 (COCO classes) |
| SSD-Inception-v2 | ssd-inception-v2 | SSD_INCEPTION_V2 | 91 (COCO classes) |
| DetectNet-COCO-Dog | coco-dog | COCO_DOG | dogs |
| DetectNet-COCO-Bottle | coco-bottle | COCO_BOTTLE | bottles |
| DetectNet-COCO-Chair | coco-chair | COCO_CHAIR | chairs |
| DetectNet-COCO-Airplane | coco-airplane | COCO_AIRPLANE | airplanes |
| ped-100 | pednet | PEDNET | pedestrians |
| multiped-500 | multiped | PEDNET_MULTI | pedestrians, luggage |
| facenet-120 | facenet | FACENET | faces |
To run this test, use the command:
python3 my-detection.py
This example code can be slightly modified to switch between images and cameras. The example for recognizing existing image files can be found in detectnet-console.py
Example command:
./detectnet-console.py --network=ssd-mobilenet-v2 input.jpg output.jpg
Here the --network parameter should name the network you actually want to use. input.jpg is the image file to recognize (I used an image of two people playing football), and the test screenshot is shown below:
The output image screenshot shows that it accurately detected the people and the ball in the image:
6.4 Image Semantic Segmentation
Semantic segmentation is based on image recognition and can identify objects at the pixel level. Examples can be found in segnet-console.py and segnet-camera.py
Example command for testing:
./segnet-console.py --network=fcn-resnet18-cityscapes images/city_1.jpg output.jpg
Test screenshot:
The output image is shown below:
You can also switch models to test in different scenarios, for example:
./segnet-console.py --network=fcn-resnet18-deepscene --visualize=mask images/trail_1.jpg output_mask.jpg
Test screenshot:
The input and output images are shown below:
Among them, the --network parameter can be replaced with specific models to recognize different objects. The following table shows test results for two such models, comparing the Jetson Nano and the Jetson Xavier.
| Dataset | Resolution | CLI Argument | Accuracy | Jetson Nano | Jetson Xavier |
| --- | --- | --- | --- | --- | --- |
| Cityscapes | 512×256 | fcn-resnet18-cityscapes-512x256 | 83.3% | 48 FPS | 480 FPS |
| Cityscapes | 1024×512 | fcn-resnet18-cityscapes-1024x512 | 87.3% | 12 FPS | 175 FPS |
In the command, you can specify the model resolution. For example, fcn-resnet18-pascal-voc defaults to 320×320. The running screenshot is as follows
We can specify 512×320 to improve accuracy. Example commands are as follows (using the previous example of two people playing football):
./segnet-console.py --network=fcn-resnet18-pascal-voc-512x320 images/humans_0.jpg output.jpg
The running screenshot is as follows:
The output image screenshot is shown below:
We can see that at a resolution of 320×320, the time taken is 24ms. When using 512×320, although the accuracy improves, the time taken increases to 38ms.
In practical applications, choose the model according to your specific needs; when you need to detect human figures as quickly as possible, the 320×320 model is recommended.
The Jetson Nano has a high-performance GPU, which can not only perform calculations but also run games. Among open-source game console emulators, there is a software called PPSSPP.
PPSSPP (PlayStation Portable Simulator Suitable for Playing Portably) is a cross-platform open-source PSP emulator.
This article has been long-awaited by the Chipboard Workshop team. Recently, several open-source handhelds have been released, such as the Old Zhang ZPG, Old Zhou 350, OGA, and RK2020, and players are going wild. Everyone wants to know what the next generation of open-source handhelds will be. Will it be RK3326 or RK3399?
Some friends want to test how Jetson Nano runs PPSSPP, but due to technical issues, they are unable to do so. I studied computer technology at a key university, and my skills are quite advanced. Moreover, I am very willing to help everyone, so this tutorial came about. I also plan to create a free video tutorial on compiling PPSSPP on Jetson Nano on a non-commercial video site.
Today, I will first release the tutorial document to everyone. Firstly, to satisfy everyone’s desire to play games on Jetson Nano; secondly, to use this free tutorial to divert everyone’s interest, so that fans of OGA and RK2020 handhelds stop fighting. Since there are so many good things in the open-source community, don’t waste time on meaningless arguments (this is a concern from the Chipboard Workshop editor).
7.1 Compiling and Installing SDL2
PPSSPP depends on the SDL library, but the libsdl2-dev package in the software source causes errors when building PPSSPP, so SDL2 needs to be compiled and installed from source. Downloading SDL2 from GitHub is quite slow, so you can use Gitee in China. The commands are:
git clone https://gitee.com/QiangGeGit/SDL2.git
cd SDL2
mkdir build
cd build
../configure
make
sudo make install
The above commands will download the source code, create a build output directory in the SDL project directory, and output the compiled files to this folder, as shown in the screenshot below:
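After make install completes, you can confirm that the freshly built SDL2 is the one being picked up (assuming the default /usr/local install prefix) and refresh the linker cache:
sdl2-config --version
sudo ldconfig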
7.2 Downloading the PPSSPP Source Code
The compilation of PPSSPP can refer to
https://github.com/hrydgard/ppsspp/wiki/Build-instructions
However, note that the example for Raspberry Pi 4 is not applicable to Jetson Nano. Jetson Nano needs to compile using normal Linux compilation commands.
Download the source code
git clone –recurse-submodules https://github.com/hrydgard/ppsspp.git
Enter the project directory:
cd ppsspp
After entering the directory, you will find that many of the folders are empty. This is because the ppsspp repository contains many submodules that come from other projects. Download the subprojects into their corresponding directories with:
git submodule update --init --recursive
This step is crucial. The PPSSPP wiki mentions that you need to install a bunch of packages using the following command:
sudo apt install build-essential cmake libgl1-mesa-dev libsdl2-dev libvulkan-dev
As mentioned earlier, the SDL in the apt source has issues, and we have compiled it from source. Therefore, the actual command should be:
sudo apt install build-essential clang cmake libgl1-mesa-dev libvulkan-dev libzip-dev
7.3 Using the Clang Compiler
You may have noticed that in the last command of the previous section I also installed libzip-dev, which is another build dependency. I once had a long compile suddenly fail with an error that zip was not found, which was quite frustrating, so I list it here specifically to keep you out of the same pit.
Finally, compile PPSSPP with the command (note the two dashes):
./b.sh --release
If you did not set up Clang as described in the previous section, the command above compiles with GCC, which is slow. As promised, here is the trick: the b.sh script already contains logic to temporarily switch the compiler environment variables, so you just add the clang flag to the command:
./b.sh --release --clang
This also illustrates the right way to use an open-source project. The wiki is there to guide you, but a large project has many contributors and the wiki often lags behind the code. As an open-source project, all the code is in front of you, and the authoritative usage comes from the code itself; you usually cannot read all of it, but you should at least read the outermost wrapper or the build scripts.
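If you prefer to drive the build yourself rather than through b.sh, the equivalent flow is roughly the following; this is a sketch based on standard CMake conventions, not a transcript of the script:
cd ppsspp
mkdir -p build && cd build
CC=clang CXX=clang++ cmake .. -DCMAKE_BUILD_TYPE=Release
make -j4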
After compilation, the PPSSPPSDL file will be generated in the build directory, which is the executable file, as shown in the following image:
7.4 Testing PPSSPP
Under Ubuntu, double-click the executable to run it; the PPSSPP interface is shown below. I had already loaded a game image, so a game icon appears in the menu.
During the first startup, some settings need to be configured; otherwise, it will lag. You can choose Vulkan or OpenGL as the rendering engine, and I recommend using Vulkan for better performance.
Set the rendering resolution to three times that of the PSP. The larger the setting, the more detailed the picture, but if it is too large, it will lag. I found that three times is quite suitable, providing a clear picture without lag.
Do not set texture filtering too high; 4x is enough. It contributes little to perceived quality: in fast-paced games you are watching the characters and the action, and you usually cannot tell 4x from 16x, while higher settings waste GPU resources.
For testing purposes, we display the resolution and running speed.
The screenshot below is a test of “God of War: Ghost of Sparta,” which can run at 60 frames per second. Considering that the image quality settings are also not low, this performance should stand out among various open-source game consoles on the market!
Alright, this concludes the review of the Jetson Nano development board. Seeed Studio is currently offering a special promotion on their Tmall store, along with a wealth of tutorials, so feel free to check it out!
Additionally, I must mention the newly released Jetson Xavier NX from NVIDIA. It can serve as a cloud-native platform for AI edge applications, providing compute for locally deployed edge AI. Don’t be fooled by its Jetson Nano-like appearance: its performance is greatly improved over previous generations, with up to 21 TOPS of computing power.
According to Seeed Studio, the performance of Jetson Xavier NX is more than ten times that of Jetson TX2, while the power consumption is only 10 watts. Of course, in practical engineering applications, the biggest highlight of Jetson Xavier NX is its ability to run neural networks in parallel while processing high-resolution data from multiple sensors; this allows it to meet the needs of a complete AI system.
The significant performance jump of the Jetson Xavier NX over the Jetson Nano or Jetson TX2 is largely due to its INT8 support. The AI computing performance of the Jetson Nano is about 0.5 TFLOPS (FP16); the Jetson TX2 about 1.3 TFLOPS (FP16); while the Jetson Xavier NX reaches 6 TFLOPS (FP16) or 21 TOPS (INT8). The INT8 mode is what makes its performance stand out.
In terms of GPU, Jetson Xavier NX has made significant progress. Jetson Nano has 128 CUDA cores based on the NVIDIA Maxwell architecture, as mentioned in previous chapters; Jetson TX2 has 256 CUDA cores based on the NVIDIA Pascal architecture; and Jetson Xavier NX has 384 CUDA cores plus 48 Tensor cores based on the NVIDIA Volta architecture.
The Volta architecture is NVIDIA’s “new nuclear bomb,” with the first product being the Tesla V100 computing card. The Volta architecture introduces innovative Tensor cores, which are the biggest highlights. Now, with the same architecture, we have a smaller version of the “nuclear bomb,” which is the Jetson Xavier NX, offering powerful performance while being efficient and energy-saving.
The Jetson Xavier NX adopts a core board + carrier board design, with the hardware details of the core board as follows:
Seeed Studio’s review video is available for viewing.
Seeed Studio’s Tmall store currently has stock for sale, so feel free to check it out.