
In this document, we will introduce how to convert an ONNX model into a model that can be used on the C3V NPU. The first part introduces the working environment needed for model conversion; the second part introduces the conversion steps.

1. Environment

NN toolkit installation completed

NN IDE installation completed

Download acuity_examples_c901149.tgz for model conversion

The NPU kernel driver, NN Toolkit, and NN IDE versions must match:

NPU Kernel Driver | NN Toolkit | NN IDE
v6.4.13.8         | 6.18.1     | 5.7.2
v6.4.15.9         | 6.21.1     | 5.8.2

1.1. Prepare NN Toolkit Environment

Please refer to this document: Acuity Toolkit Environment

Export the environment variable:

export ACUITY_PATH=/home/users/data/share/c3v/acuity-toolkit-whl-6.18.1/bin
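Before continuing, it can help to confirm the path actually exists on your machine. The sanity check below is not part of the official flow; adjust the path to match your installed toolkit version:

```shell
# Export the toolkit path (adjust to your installed toolkit version).
export ACUITY_PATH=/home/users/data/share/c3v/acuity-toolkit-whl-6.18.1/bin

# Optional sanity check: warn if the path does not exist on this machine.
[ -d "$ACUITY_PATH" ] || echo "warning: $ACUITY_PATH not found"
echo "ACUITY_PATH=$ACUITY_PATH"
```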

1.2. Install the IDE

Please refer to this document: VivanteIDE Install

1.3. Uncompress Examples

tar xvf acuity_examples_c901149.tgz

Suppose the folder is named c3v. The file layout is as follows:

image-20240202-032651.png

Create the script soft links:

cd Model
source env.sh
image-20240202-063944.png
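Sourcing env.sh should leave the pegasus helper scripts linked in the Model directory. A quick optional check, assuming the script names used in the steps below:

```shell
# List the pegasus helper scripts env.sh is expected to have linked here.
# If they are missing, re-run 'source env.sh' from the Model directory.
ls pegasus_import.sh pegasus_quantize.sh pegasus_inference.sh pegasus_export_ovx.sh 2>/dev/null \
  || echo "pegasus scripts not found - re-run 'source env.sh' in the Model directory"
```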

2. Conversion

2.1. Preparing

  1. Create the model folder:

mkdir yolov8s
cd yolov8s

  2. Copy the ONNX file as the input, and copy input.jpg, whose resolution is 640x640. Make sure the folder name is the same as the ONNX file name.

cp ../yolov8s.onnx .
cp ../input.jpg .

  3. Create a dataset.txt file; its content is the input.jpg file name:

./input.jpg

  4. Create an inputs_outputs.txt file and get the input/output tensor information from yolov8s.onnx via the Netron tool/webpage.

image-20240202-032930.png

Write the --inputs, --outputs, and --input-size-list information to inputs_outputs.txt:

--inputs images --outputs 'onnx::Reshape_329 onnx::Reshape_344 onnx::Reshape_359' --input-size-list '3,640,640'
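Steps 3 and 4 above can be sketched as a pair of shell commands, run inside the yolov8s folder (the quoted heredoc preserves the single quotes exactly as shown):

```shell
# Step 3: dataset.txt lists the calibration image(s), one path per line.
echo './input.jpg' > dataset.txt

# Step 4: write the Netron-derived tensor names and the input shape.
# The quoting must survive verbatim, so a quoted heredoc is used here.
cat > inputs_outputs.txt <<'EOF'
--inputs images --outputs 'onnx::Reshape_329 onnx::Reshape_344 onnx::Reshape_359' --input-size-list '3,640,640'
EOF

cat dataset.txt inputs_outputs.txt
```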

After completing the above steps, there will be the following files under yolov8s:

image-20240202-033618.png

2.2. Implementing

Import

This command will import and translate an NN model to ACUITY formats. Execute the command in the console or terminal, and wait for it to complete.

./pegasus_import.sh yolov8s

After the command execution is completed, you will see the following four files added under the folder.

image-20240202-033755.png
image-20240202-064432.png
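You can spot-check the import outputs from the shell. yolov8s_inputmeta.yml is named later in this guide; the .json/.data names follow Acuity's usual naming scheme and are assumptions here, so compare against the screenshot above for the exact file set:

```shell
# Spot-check the import outputs. yolov8s_inputmeta.yml is named later in
# this guide; the .json/.data names are assumed from Acuity's usual
# naming scheme - compare with the screenshot for the exact fourth file.
for f in yolov8s.json yolov8s.data yolov8s_inputmeta.yml; do
  [ -f "yolov8s/$f" ] && echo "ok: $f" || echo "missing: $f"
done
```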

Quantize

Please modify the scale value (set it to 1/255) in yolov8s_inputmeta.yml.

image-20240202-064502.png
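A sed one-liner can apply the scale change. The stub file below only demonstrates the substitution; the real key layout in yolov8s_inputmeta.yml may differ from this guess, so verify against the screenshot before editing the real file in place:

```shell
# Demonstrate the edit on a stub file whose layout is an assumption;
# check your generated yolov8s_inputmeta.yml before running sed on it.
printf 'scale:\n- 1.0\n' > inputmeta_stub.yml
# 1/255 is approximately 0.0039216; replace the default scale of 1.0.
sed -i 's/- 1.0/- 0.0039216/' inputmeta_stub.yml
cat inputmeta_stub.yml
```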

Please select one quantization type for your needs, such as uint8 / int16 / bf16 / pcq.

./pegasus_quantize.sh yolov8s uint8
image-20240202-064521.png
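Because the quantization type is just the second argument, comparing several types is a small loop. The `echo` is kept in so this sketch runs without the toolkit installed; drop it to actually execute:

```shell
# Quantize the model with several types to compare accuracy and speed.
# 'echo' only prints the commands; remove it to run them for real.
for q in uint8 int16; do
  echo ./pegasus_quantize.sh yolov8s "$q"
done
```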

After the command execution is completed, you will see the following two files added under the folder.

image-20240202-064535.png

Inference

Run inference on the NN model with the quantized data type.

./pegasus_inference.sh yolov8s uint8
image-20240202-064558.png

Export

Export the quantized application for device deployment. If you want to get the network binary (NB) file, please modify pegasus_export_ovx.sh to generate it, adding the three lines marked in the red box.

image-20240202-064609.png
./pegasus_export_ovx.sh yolov8s uint8
image-20240202-064637.png
image-20240202-064643.png
image-20240202-064652.png

We get the NB file and a C file containing the NN graph setup information.

image-20240202-064705.png
