In the previous article, we discussed how to develop AI inference applications on the Orange Pi AIpro and saw that, before inference, the original network model (PyTorch, TensorFlow, Caffe, etc.) must first be converted into an .om model; only then can model execution interfaces such as the Ascend aclmdlExecute be called to run inference on the Orange Pi AIpro. This conversion is performed with the ATC tool.
Currently, the ATC tool directly supports converting Caffe, ONNX, TensorFlow, and MindSpore models. If your training framework is PyTorch, you therefore need to export the model to ONNX with torch.onnx.export before using the ATC tool.
Introduction to the ATC Tool
Basic Usage of the ATC Tool
atc --framework=0 --soc_version=${soc_version} --model=$HOME/mod/resnet50.prototxt --weight=$HOME/mod/resnet50.caffemodel --output=$HOME/mod/out/caffe_resnet50
- --framework: Framework type of the original model; 0 indicates Caffe (1: MindSpore, 3: TensorFlow, 5: ONNX).
- --soc_version: Version of the Ascend AI processor used during model conversion. You can query it by executing the npu-smi info command and prepending "Ascend" to the reported "Name"; for example, if the value of "Name" is xxxyy, set --soc_version to Ascendxxxyy.
- --model: Path of the original network model file, including the file name.
- --weight: Path of the original network model weight file, including the file name; required only when the original model is a Caffe model.
- --output: Path and file name of the converted *.om model file. After a successful conversion, the .om suffix is appended to the file name automatically.
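To make the --soc_version rule above concrete, the sketch below prepends "Ascend" to a "Name" value as reported by npu-smi info and assembles the full atc command line. The chip name "310B1" is a hypothetical placeholder; substitute whatever npu-smi info reports on your board.

```python
# Hypothetical illustration: deriving --soc_version from the "Name" field
# reported by `npu-smi info`, then assembling the atc command line.
chip_name = "310B1"                      # placeholder for the value under "Name"
soc_version = "Ascend" + chip_name       # e.g. "Ascend" + "310B1"

atc_cmd = (
    "atc --framework=0"
    f" --soc_version={soc_version}"
    " --model=$HOME/mod/resnet50.prototxt"
    " --weight=$HOME/mod/resnet50.caffemodel"
    " --output=$HOME/mod/out/caffe_resnet50"
)
print(atc_cmd)
```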
If the "ATC run success" message is displayed, the model conversion succeeded. You can find the converted model file, for example caffe_resnet50.om, at the path specified by the --output parameter.
Advanced Usage of the ATC Tool
1. Convert the original model file or Ascend *.om model file to json format
atc --mode=1 --framework=0 --om=$HOME/mod/resnet50.prototxt --json=$HOME/mod/out/caffe_resnet50.json
atc --mode=1 --om=$HOME/mod/out/caffe_resnet50.om --json=$HOME/mod/out/caffe_resnet50.json
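The dumped json describes the model graph as plain data, so it can be inspected with ordinary json tooling, for example to list the operators in the graph. The structure below is a simplified stand-in for illustration, not the exact schema ATC emits.

```python
import json

# Simplified stand-in for the json dumped by `atc --mode=1`; the real ATC
# output is far more detailed, but can be inspected the same way.
dump = (
    '{"graph": [{"op": ['
    '{"name": "conv1", "type": "Convolution"}, '
    '{"name": "pool1", "type": "Pooling"}]}]}'
)

graph = json.loads(dump)
op_names = [op["name"] for op in graph["graph"][0]["op"]]
print(op_names)  # operator names found in the dumped graph
```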
2. Specify input/output data types and output nodes
atc --framework=0 --soc_version=${soc_version} --model=$HOME/mod/resnet50.prototxt --weight=$HOME/mod/resnet50.caffemodel --output=$HOME/mod/out/caffe_resnet50 --input_fp16_nodes="data" --out_nodes="pool1:0" --output_type="pool1:0:FP16"
3. Set dynamic batch size / dynamic image size
atc --framework=0 --soc_version=${soc_version} --model=$HOME/mod/resnet50.prototxt --weight=$HOME/mod/resnet50.caffemodel --output=$HOME/mod/out/caffe_resnet50 --input_shape="data:-1,3,224,224" --dynamic_batch_size="1,2,4,8"
atc --framework=0 --soc_version=${soc_version} --model=$HOME/mod/resnet50.prototxt --weight=$HOME/mod/resnet50.caffemodel --output=$HOME/mod/out/caffe_resnet50 --input_shape="data:1,3,-1,-1" --dynamic_image_size="224,224;448,448"
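At inference time, a model converted with --dynamic_batch_size only accepts the batch sizes (gears) listed in that option, so callers typically pad a request up to the nearest configured gear. The helper below is a generic sketch of that idea, written for this article; it is not part of the ACL API.

```python
# Generic sketch: choose the smallest configured batch gear that fits the
# actual number of samples, then pad the batch up to that gear.
# Mirrors the gears set via --dynamic_batch_size="1,2,4,8".
GEARS = [1, 2, 4, 8]

def pick_gear(n_samples, gears=GEARS):
    """Return the smallest gear >= n_samples; raise if none fits."""
    for g in sorted(gears):
        if g >= n_samples:
            return g
    raise ValueError(f"{n_samples} samples exceed the largest gear {max(gears)}")

def pad_batch(batch, gear):
    """Pad a list of samples with copies of the last sample up to `gear`."""
    return batch + [batch[-1]] * (gear - len(batch))

samples = ["img0", "img1", "img2"]   # 3 real samples
gear = pick_gear(len(samples))       # smallest gear that fits 3 samples
padded = pad_batch(samples, gear)    # padded batch; extras are duplicates
```

The padded entries are discarded after inference; only the first len(samples) outputs are kept.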