...

  • NPU Kernel Driver: v6.4.15.9

  • Acuity Toolkit: 6.21.1

  • VivanteIDE: 5.8.2

1. Model Conversion

Before converting the model, the environment for model conversion must be set up. Please refer to the following document to prepare the environment: NN Model Conversion

1.1. Project Preparation

  1. Create the model folder

Create a folder named yolov8s under ~/c3v/Models. Please ensure the folder name is the same as the ONNX file name.
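
For example, assuming the ONNX model file is named yolov8s.onnx and sits in the current working directory, this step can look like the following (paths follow the layout above; adjust as needed):

Code Block
mkdir -p ~/c3v/Models/yolov8s
cp ./yolov8s.onnx ~/c3v/Models/yolov8s/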

...

After completing the above steps, there will be the following files under the yolov8s path:

...

1.2. Implementation

Use the shell script tools to convert the model from ONNX to an NB file. There are four steps: import, quantize, inference, and export (a sketch of the full sequence follows the list below). The tools are located in ~/c3v/Models:

  • pegasus_import.sh

  • pegasus_quantize.sh

  • pegasus_inference.sh

  • pegasus_export_ovx.sh
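
As a rough sketch of the whole flow, the four scripts are run in order. Passing the model name as the only argument is an assumption here; check the exact invocation used in your environment:

Code Block
cd ~/c3v/Models
./pegasus_import.sh yolov8s      # import: translate the ONNX model
./pegasus_quantize.sh yolov8s    # quantize the model
./pegasus_inference.sh yolov8s   # inference with the quantized data type
./pegasus_export_ovx.sh yolov8s  # export the nb file and graph setup code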

Import

Execute the command in the console or terminal and wait for it to complete. It imports the ONNX model and translates it into the Acuity NN format.

...

Then we will see the following four files added under the folder ~/c3v/Models/yolov8s.

...

Quantize

Modify the scale value (1/255 = 0.003921569) in the yolov8s_inputmeta.yml file, which is located in ~/c3v/Models/yolov8s.
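
The relevant entry looks roughly like the excerpt below; only the scale field matters here, and the exact layout of the generated file may differ between Acuity versions:

Code Block
# excerpt from yolov8s_inputmeta.yml (surrounding keys omitted)
      scale: 0.003921569    # 1/255, maps 0-255 pixel values into the 0-1 range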

...

Then we will see the following four files added under the folder ~/c3v/Models/yolov8s.

...

Inference

Run inference on the NN model with the quantized data type.

...

Wait until the tool execution is complete and check that it finishes without errors, as shown below:

...

Export

Export the quantized application for device deployment. Please modify pegasus_export_ovx.sh to generate the nb file, adding the 3 lines marked in the red box.

...

This produces the nb file and a C file with the NN graph setup information.

...

2. Object Detection Program

2.1. Post Processing

The post-processing in the example code automatically generated by the tool only prints the top-5 results. We need to extend the result parsing to obtain the complete object detection output; a sketch of this kind of parsing follows below. The relevant post-processing functions are located in the file vnn_post_process.c.

...
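
As an illustration of the kind of parsing involved, the sketch below filters detections by confidence and applies a simple IoU-based NMS. It is a minimal, self-contained example: the box_t layout, the thresholds, and the hard-coded detections are assumptions for demonstration, not the actual interface of vnn_post_process.c.

Code Block
/* Sketch: confidence filtering + IoU-based NMS over decoded detections. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    float x, y, w, h;   /* box centre and size, in pixels */
    float score;        /* detection confidence */
    int   cls;          /* class index */
} box_t;

/* Intersection-over-union of two centre/size boxes. */
static float iou(const box_t *a, const box_t *b)
{
    float ax1 = a->x - a->w / 2, ay1 = a->y - a->h / 2;
    float ax2 = a->x + a->w / 2, ay2 = a->y + a->h / 2;
    float bx1 = b->x - b->w / 2, by1 = b->y - b->h / 2;
    float bx2 = b->x + b->w / 2, by2 = b->y + b->h / 2;
    float ix1 = ax1 > bx1 ? ax1 : bx1, iy1 = ay1 > by1 ? ay1 : by1;
    float ix2 = ax2 < bx2 ? ax2 : bx2, iy2 = ay2 < by2 ? ay2 : by2;
    float iw = ix2 - ix1 > 0 ? ix2 - ix1 : 0;
    float ih = iy2 - iy1 > 0 ? iy2 - iy1 : 0;
    float inter = iw * ih;
    float uni = a->w * a->h + b->w * b->h - inter;
    return uni > 0 ? inter / uni : 0;
}

/* Sort by score, highest first. */
static int cmp_score(const void *p, const void *q)
{
    const box_t *a = p, *b = q;
    return (a->score < b->score) - (a->score > b->score);
}

/* Keep every box above conf_thr, suppressing same-class overlaps above iou_thr. */
static int nms(box_t *boxes, int n, float conf_thr, float iou_thr, box_t *out)
{
    int kept = 0;
    qsort(boxes, n, sizeof(box_t), cmp_score);
    for (int i = 0; i < n; i++) {
        if (boxes[i].score < conf_thr)
            continue;
        int suppressed = 0;
        for (int j = 0; j < kept; j++) {
            if (out[j].cls == boxes[i].cls && iou(&out[j], &boxes[i]) > iou_thr) {
                suppressed = 1;
                break;
            }
        }
        if (!suppressed)
            out[kept++] = boxes[i];
    }
    return kept;
}

int main(void)
{
    /* Hypothetical decoded detections; in the real program these come from
     * the network output tensors. */
    box_t in[] = {
        { 100, 100, 50, 80, 0.90f, 0 },
        { 102,  98, 52, 78, 0.85f, 0 },   /* overlaps the first box */
        { 300, 200, 40, 40, 0.70f, 2 },
    };
    box_t out[3];
    int n = nms(in, 3, 0.25f, 0.45f, out);
    for (int i = 0; i < n; i++)
        printf("class %d score %.2f box (%.0f, %.0f, %.0f, %.0f)\n",
               out[i].cls, out[i].score, out[i].x, out[i].y, out[i].w, out[i].h);
    return 0;
}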

For detailed function implementation, please refer to the following file:

View file: vnn_post_process.zip

2.2. Program Compilation

When compiling NN-related applications, the NN SDK's headers and libraries must be included.

...

Code Block
BIN=yolov8s_sample 

NN_SDK_DIR=Path to NN SDK directory
TOOLCHAIN=Path to toolchain directory

NN_SDK_INC=$(NN_SDK_DIR)/include
NN_SDK_LIB=$(NN_SDK_DIR)/lib

# Uncomment one of the two compiler configurations below.
# 1. cross compile
#CROSS_COMPILE=$(TOOLCHAIN)/aarch64-none-linux-gnu-
#CC=$(CROSS_COMPILE)gcc
#CXX=$(CROSS_COMPILE)g++

# 2. build natively on C3V
#CC=gcc
#CXX=g++

CFLAGS=-Wall -O3

INCLUDE += -I$(NN_SDK_INC) -I$(NN_SDK_INC)/HAL -I$(NN_SDK_INC)/ovxlib -I$(NN_SDK_INC)/jpeg
LIBS += -L$(NN_SDK_LIB) -L./ -L$(STD_LOG_INC)
LIBS += -lOpenVX -lOpenVXU -lOpenVX -lCLC -lVSC -lGAL -ljpeg -lovxlib -lm
LIBS += -lNNArchPerf -lArchModelSw
LIBS += -lstdc++ -ldl -lpthread -lgcc_s

CFLAGS += $(INCLUDE) -fPIC

SRCS=${wildcard *.c}
SRCS+=${wildcard *.cpp}

OBJS=$(addsuffix .o, $(basename $(SRCS)))

.SUFFIXES: .hpp .cpp .c 

.cpp.o:
	$(CXX) $(CFLAGS) -std=c++11 -c $<

.c.o:
	$(CC) $(CFLAGS) -c $<

all: $(BIN)

$(BIN): $(OBJS)
	$(CC) $(CFLAGS) $(LFLAGS) $(OBJS) -o $@ $(LIBS) 
	rm -rf *.o

clean:
	rm -rf *.o
	rm -rf $(BIN) $(LIB)
	rm -rf *~
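
To build, point NN_SDK_DIR at the NN SDK directory (and, when cross compiling, uncomment the cross-compile lines and set TOOLCHAIN); both variables can also be overridden on the make command line. The paths below are placeholders, not actual locations:

Code Block
make NN_SDK_DIR=/path/to/nn_sdk TOOLCHAIN=/path/to/toolchain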

3. Running on C3V Linux

Insert the NPU kernel module (galcore.ko) into the kernel:

Code Block
insmod ./galcore.ko
[14358.019373] galcore f8140000.galcore: NPU get power success
[14358.019458] galcore f8140000.galcore: galcore irq number is 44
[14358.020542] galcore f8140000.galcore: NPU clock: 900000000
[14358.026015] Galcore version 6.4.15.9.700103
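
After the module is loaded, the compiled sample can be run on the board. The exact arguments depend on the generated main function; the file names below are assumptions for illustration only:

Code Block
./yolov8s_sample ./yolov8s.nb ./input.jpg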

...