
This document provides a detailed description of:

  • How to convert the YOLOv8 ONNX model into a model for use on the C3V platform

  • How to write sample code for object pose estimation based on YOLOv8

  • How to execute the object pose program and obtain recognition results in the C3V Linux environment

The tool versions involved in this document are as follows:

  • NPU Kernel Driver: v6.4.15.9 / v6.4.18.5

  • Acuity Toolkit: 6.21.1 / 6.30.7

  • VivanteIDE: 5.8.2 / 5.10.1

1. Model Conversion

Before the conversion, it is necessary to first set up the environment for model conversion. Please refer to the following document to prepare the environment: NN Model Conversion

1.1. Project Preparation

  1. Create the model folder

Create a folder named yolov8s-pose under ~/c3v/Models. Please ensure the folder name matches the ONNX file name.

~/c3v/Models$ mkdir yolov8s-pose && cd yolov8s-pose
  2. Copy the ONNX file and input.jpg, whose resolution is 640x640, to the folder yolov8s-pose. These two files will be used as input files during model conversion.

~/c3v/Models$ cp yolov8s-pose.onnx yolov8s-pose/
~/c3v/Models$ cp input.jpg yolov8s-pose/
  3. Create a dataset.txt file; its content is the input.jpg file name:

./input.jpg
  4. Create an inputs_outputs.txt file and get the information from yolov8s-pose.onnx via the netron tool/webpage. Here is the ONNX file:

image-20241105-061416.png

Select the operators within the red box as the outputs, and write the --input-size-list and --outputs information to inputs_outputs.txt:

--outputs  '/model.22/Sigmoid_output_0 /model.22/Mul_2_output_0 /model.22/Sigmoid_1_output_0 /model.22/Mul_4_output_0'
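For reference, a complete inputs_outputs.txt might look like the following. The --input-size-list value of '3,640,640' is an assumption based on the 640x640 input image; confirm the actual input shape of your model in netron before using it.

```
--input-size-list '3,640,640'
--outputs '/model.22/Sigmoid_output_0 /model.22/Mul_2_output_0 /model.22/Sigmoid_1_output_0 /model.22/Mul_4_output_0'
```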

After completing the above steps, there will be the following files under the yolov8s-pose path:

image-20241105-061818.png

1.2. Implementing

Use the shell script tools to convert the model from ONNX to an NB file. There are 4 steps: import, quantize, inference, and export. The tools are in ~/c3v/Models:

  • pegasus_import.sh

  • pegasus_quantize.sh

  • pegasus_inference.sh

  • pegasus_export_ovx.sh

Import

Execute the command in the console or terminal and wait for it to complete. It imports the ONNX model and translates it into the intermediate NN format.

./pegasus_import.sh yolov8s-pose

Wait until the tool execution is complete and check that there are no errors:

image-20241105-061930.png

Then we will see the following four files added under the folder ~/c3v/Models/yolov8s-pose.

image-20241105-062044.png

Quantize

Modify the scale value (1/255 = 0.003921569) in the yolov8s-pose_inputmeta.yml file, which is in ~/c3v/Models/yolov8s-pose.
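As a hedged illustration, the edited preprocessing section of yolov8s-pose_inputmeta.yml might look like the fragment below. The exact key names and nesting follow the Acuity inputmeta format generated by pegasus_import.sh; verify them against your own generated file rather than copying this verbatim.

```yaml
      preprocess:
        reverse_channel: true
        mean:
        - 0
        - 0
        - 0
        scale: 0.003921569   # = 1/255, normalizes 0-255 pixel values to 0-1
```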

image-20240802-074702.png

Select a quantization type according to your needs, such as uint8 / int16 / bf16 / pcq. In this sample we use int16.

./pegasus_quantize.sh yolov8s-pose int16

Wait until the tool execution is complete and check that there are no errors:

image-20241105-062224.png

Then we will see the following four files added under the folder ~/c3v/Models/yolov8s-pose.

image-20241105-062330.png

Inference

Run inference on the NN model with the quantization data type.

./pegasus_inference.sh yolov8s-pose int16

Wait until the tool execution is complete and check that there are no errors:

image-20241105-062433.png

Export

Export the quantized application for device deployment. Please modify pegasus_export_ovx.sh to generate the NB file by adding the 3 lines marked in the red box.

image-20241105-062916.png
./pegasus_export_ovx.sh yolov8s-pose int16

Wait until the tool execution is complete and check that there are no errors:

image-20240802-080358.png

In the path ~/c3v/Models/yolov8s-pose/wksp, you will find a folder named yolov8s-pose_int16_nbg_unify.

image-20241105-063056.png

There we can get the NB file and a C file containing the NN graph setup information.

image-20241105-063143.png

2. YoloV8 Pose Program

2.1. Post Processing

The post-processing in the example code generated by the tool only prints the top-5 results. We need to add parsing of the results to obtain the complete target-recognition results. The relevant post-processing functions are located in the file vnn_post_process.c.

We provide an example function for post-processing, which can complete the parsing of NN processing results:

  • post_proc_init

  • post_proc_process

  • post_proc_deinit

The function that needs to be modified is vnn_PostProcessYolov8sPoseInt16:

vsi_status vnn_PostProcessYolov8sPoseInt16(vsi_nn_graph_t *graph)
{
    vsi_status status = VSI_FAILURE;

#if DETECT_RESULT_IMPL
    /*detect result sample implement*/
    post_proc_init(graph);
    post_proc_process(graph);
    post_proc_deinit();

#else
    /* Show the top5 result */
    status = show_top5(graph, vsi_nn_GetTensor(graph, graph->output.tensors[0]));
    TEST_CHECK_STATUS(status, final);

    /* Save all output tensor data to txt file */
    save_output_data(graph);

final:
#endif

    return VSI_SUCCESS;
}

For the detailed function implementation, please refer to the following file, which needs to be unzipped and placed in the ~/c3v/Models/yolov8s-pose/wksp/yolov8s-pose_int16_nbg_unify folder.

2.2. Program Compile

When compiling NN-related applications, SDK's headers and libraries must be included.

  • Example of SDK include paths:

INCLUDES+=-I$(NN_SDK_DIR)/include/ \
-I$(NN_SDK_DIR)/include/CL \
-I$(NN_SDK_DIR)/include/VX \
-I$(NN_SDK_DIR)/include/ovxlib \
-I$(NN_SDK_DIR)/include/jpeg

  • Example of SDK link libraries:

LIBS+=-lOpenVX -lOpenVXU -lCLC -lVSC -lGAL -ljpeg -lovxlib

The following file is an example Makefile; it needs to be unzipped and placed in the ~/c3v/Models/yolov8s-pose/wksp/yolov8s-pose_int16_nbg_unify folder.
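If the attached Makefile is unavailable, a minimal sketch of its likely shape is shown below. This is an assumption-based skeleton, not the attached file: the source list simply globs all generated .c files, and the library search path under $(NN_SDK_DIR)/lib may differ on your SDK.

```makefile
BIN = yolov8s-pose-int16

NN_SDK_DIR ?= /usr
CC ?= gcc

INCLUDES += -I$(NN_SDK_DIR)/include/ \
            -I$(NN_SDK_DIR)/include/CL \
            -I$(NN_SDK_DIR)/include/VX \
            -I$(NN_SDK_DIR)/include/ovxlib \
            -I$(NN_SDK_DIR)/include/jpeg
LIBS += -L$(NN_SDK_DIR)/lib -lOpenVX -lOpenVXU -lCLC -lVSC -lGAL -ljpeg -lovxlib

SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

all: $(BIN)

$(BIN): $(OBJS)
	$(CC) -o $@ $^ $(LIBS)

%.o: %.c
	$(CC) $(INCLUDES) -c -o $@ $<

clean:
	rm -f $(BIN) $(OBJS)
```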

2.2.1. Build on C3V

If you want to build the project on C3V directly, please modify these contents of the Makefile:

BIN=yolov8s-pose-int16

# 2.build in c3v
NN_SDK_DIR=/usr

CC=gcc
CXX=g++

2.2.2. Cross compile on Linux

If you want to build the project on a Linux host, please modify these contents of the Makefile:

BIN=yolov8s-pose-int16

# 1.cross compile
NN_SDK_DIR=Path to NN SDK directory
TOOLCHAIN=Path to toolchain directory

CROSS_COMPILE=$(TOOLCHAIN)/aarch64-none-linux-gnu-
CC=$(CROSS_COMPILE)gcc
CXX=$(CROSS_COMPILE)g++

You need to set the correct paths for NN_SDK_DIR and TOOLCHAIN:

NN_SDK_DIR: the path to the NPU SDK.

TOOLCHAIN: the cross-compile toolchain path, which may look like this:

TOOLCHAIN=/pub/toolchain/crossgcc/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu/bin

The brief folder layout of the project looks like this:

image-20241106-122913.png

Then use make to compile the project:

make

3. Running on C3V Linux

Use insmod to load the driver into the kernel if it is not already probed; otherwise skip this step.

insmod ./galcore.ko
[14358.019373] galcore f8140000.galcore: NPU get power success
[14358.019458] galcore f8140000.galcore: galcore irq number is 44
[14358.020542] galcore f8140000.galcore: NPU clock: 900000000
[14358.026015] Galcore version 6.4.18.5

Copy the application and related libraries into C3V Linux and run it:

param1 is the NB file converted by the Acuity toolkit.

param2 is the image to run detection on.

./yolov8s-pose-int16 ./network_binary.nb ./input.jpg

The result looks like this:

/mnt/yolov8s-pose_int16_nbg_unify # ./yolov8s-pose-int16 ./network_binary.nb
../input.jpg
Create Neural Network: 59ms or 59044us
Verify...
Verify Graph: 24ms or 24933us
Start run graph [1] times...
Run the 1 time: 122.44ms or 122443.93us
vxProcessGraph execution time:
Total   122.66ms or 122658.48us
Average 122.66ms or 122658.48us
obj: L: 0 P:0.93, [(0, 42) - (200, 599)]
obj: L: 0 P:0.91, [(309, 279) - (180, 361)]
obj: L: 0 P:0.58, [(344, 171) - (170, 301)]