This document describes how to test the YOLOv8s detection mAP on the COCO 2017 dataset. We provide hybrid quantization to convert the model.

Table of Contents

1. Prerequisites

1.1. Provided Files

Download C3VHybrid_Quantization_Resources_v1.0.zip and unzip it. We use provided_files to refer to the folder you unzip the file into. The folder structure is as follows:

...

1.2. Conda Environment Setup

Please refer to Setup_Miniconda3_Environment(C3VHybrid_Quantization).pdf to set up the Conda environment named quantize_yolov8s_demo on the Ubuntu PC and Python 3.8 on the C3V board.

1.3. Data Preparation (Ubuntu PC)

Download the COCO 2017 train and validation images and their annotations for evaluation. Unzip the downloaded files and organize them according to the folder structure shown below.
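If the dataset is not already on hand, the archives can be fetched from the standard cocodataset.org locations. A minimal sketch in Python (the URLs are the public COCO ones; the download tool itself is your choice):

Code Block
# Fetch the COCO 2017 archives from the public cocodataset.org locations.
import urllib.request

for url in [
    "http://images.cocodataset.org/zips/train2017.zip",
    "http://images.cocodataset.org/zips/val2017.zip",
    "http://images.cocodataset.org/annotations/annotations_trainval2017.zip",
]:
    print("downloading", url)
    urllib.request.urlretrieve(url, url.rsplit("/", 1)[-1])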

...

Code Block
python prepare_quantization_dataset.py
python prepare_validation_dataset_i8_bin.py
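For orientation, the i8-bin preparation boils down to resizing each validation image to the network input resolution and dumping it as a raw int8 tensor. A minimal sketch, assuming a 640x640 input, a plain resize, and an illustrative zero-point of 128 (the shipped prepare_validation_dataset_i8_bin.py may differ in letterboxing, channel order, and quantization parameters):

Code Block
# Sketch only: convert COCO val images to raw int8 .bin network inputs.
import os
import cv2
import numpy as np

SRC = "val2017"                             # COCO validation images
DST = "processed_i8_bin_for_validation"     # folder name from this guide
os.makedirs(DST, exist_ok=True)

for name in sorted(os.listdir(SRC)):
    img = cv2.imread(os.path.join(SRC, name))       # BGR, uint8
    img = cv2.resize(img, (640, 640))               # assumed input resolution
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)      # assumed channel order
    # The real zero-point/scale come from the quantized model; 128 is illustrative.
    i8 = (img.astype(np.int16) - 128).astype(np.int8)
    i8.tofile(os.path.join(DST, os.path.splitext(name)[0] + ".bin"))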

Once the scripts finish, the folder should contain the following new items:

...

Later in this document, all references to prepare_inputs refer to this folder.

2. Model Transition (Ubuntu PC / Acuity)

Set up the Conda environment named 3.8.10_python_acuity by referring to the Acuity package requirements in Section 4 of the C3V_AI_Platform_20240605.pdf.

...

  • hy_layer.txt (provided_files/)

  • yolov8s_demo.onnx (provided_files/)

  • transform_yolov8s_demo_acuity_6.30.7.py (provided_files/python_scripts/)

  • processed_inputs_for_quantization folder (prepare_inputs/)

  • inputs_for_acuity_quantization.txt (prepare_inputs/)

...

Set the environment variables based on your own folder structure and placement. The acuity_examples checkout is at commit 5272e22.
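For example, a small wrapper can export the locations before launching the transform script. The variable names and paths below are assumptions based on typical Acuity setups, not taken from this guide; match them to your own checkouts:

Code Block
# Hypothetical wrapper: export the expected locations, then run the transform.
import os
import subprocess

os.environ["ACUITY_PATH"] = "/opt/acuity-toolkit-6.30.7/bin"  # assumed Acuity location
os.environ["ACUITY_EXAMPLES"] = "/home/user/acuity_examples"  # checkout at 5272e22

subprocess.run(["python", "transform_yolov8s_demo_acuity_6.30.7.py"], check=True)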

...

Code Block
python transform_yolov8s_demo_acuity_6.30.7.py

Once the export is finished, a folder named ovxlib_application_nbg_unify should be generated. Under this folder:

...

Code Block
input_meta_tab[0].image.scale[0] = 0.00392156862
input_meta_tab[0].image.scale[1] = 0.00392156862
input_meta_tab[0].image.scale[2] = 0.00392156862
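These three values are simply 1/255, one per RGB channel: the runtime multiplies each 8-bit pixel by this scale so the network sees inputs normalized to [0, 1], matching the normalization YOLOv8 uses in training. A quick check in Python:

Code Block
# 0.00392156862 is 1/255: u8 pixels are mapped into [0, 1] by the preprocessor.
import numpy as np

scale = 1.0 / 255.0                         # == 0.0039215686...
pixels = np.array([0, 128, 255], dtype=np.uint8)
print(pixels.astype(np.float32) * scale)    # [0.0, 0.50196, 1.0]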
  • Modify vnn_post_process.c

Modify vnn_post_process.c according to modify_vnn_post_process_c.txt in the provided_files/ directory.

3. Model Inference (C3V Board)

3.2. Inference by Python

3.2.1. Build the project (Ubuntu PC)

Then import the modified ovxlib_application_nbg_unify folder into the Vivante IDE and build the project with the proper build configuration. Related information can be found in Section 6 of the C3V_AI_Platform_20240605.pdf.

...

  • built project folder

  • processed_i8_bin_for_validation.zip (prepare_inputs/)

  • c3v_inference_yolov8s_demo.py (provided_files/python_scripts/c3v_pipeline/)

3.2.2. Inference (C3V Board)

Unzip processed_i8_bin_for_validation.zip into the built project folder and move c3v_inference_yolov8s_demo.py into the same folder.

...

Code Block
chmod +x ./Debug/yolov8sdemo

Then activate the Conda environment 3.8_python and run the following command.

Code Block
python c3v_inference_yolov8s_demo.py
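For orientation, a driver script of this kind typically just runs the built binary once per preprocessed input and collects the printed detections. A minimal sketch, assuming the binary takes the .nb file plus one input .bin per invocation (the shipped c3v_inference_yolov8s_demo.py may batch, decode, and write its results differently):

Code Block
# Sketch only: drive the built NBG binary over every preprocessed .bin input.
import glob
import subprocess

BINARY = "./Debug/yolov8sdemo"    # built in section 3.2.1
NBG = "network_binary.nb"         # exported network binary graph

for bin_path in sorted(glob.glob("*.bin")):
    out = subprocess.run([BINARY, NBG, bin_path],
                         capture_output=True, text=True, check=True)
    with open(bin_path + ".txt", "w") as f:
        f.write(out.stdout)       # raw detections, to be merged for evaluation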

This command may take some time. Once complete, you should find two new files:

...

Copy them back to the Ubuntu PC.

3.2.3. Evaluation (Ubuntu PC / C3V)

Place these files in an empty folder.
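The mAP numbers come from the standard COCO evaluation in pycocotools. A minimal sketch, assuming the COCO 2017 ground-truth annotations and a detection file in COCO results format (adjust the file names to the two files copied back from the board):

Code Block
# Standard COCO bbox evaluation with pycocotools.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")               # COCO 2017 val annotations
coco_dt = coco_gt.loadRes("c3v_detect_val2017.json")   # detections from the C3V run
ev = COCOeval(coco_gt, coco_dt, "bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()    # prints the AP/AR table, including mAP@0.5:0.95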

...

The results will be printed in the terminal as follows:

...

3.3. Inference by C code

3.3.1. Prepare the environment (C3V Board)

Code Block
sudo apt update
sudo apt install python3-pip libopencv-dev libjsoncpp-dev
pip install pycocotools

3.3.2. Prepare the project (Ubuntu PC / C3V)

Download mapTestTools_v1.1.zip and unzip it. The project code tree looks like this:

...

  1. Copy these three files to the framework/model folder. The files were exported by the Acuity toolkit in this step: https://sunplus.atlassian.net/wiki/spaces/C3/pages/2354020355/C3V+Validation+Guide#3.1.-Transition-(Ubuntu-PC-%2F-Acuity)

  • network_binary.nb

  • vnn_yolov8sdemo.c

  • vnn_yolov8sdemo.h

  2. Modify model/vnn_model.c so that the generic graph create/release entry points dispatch to the Acuity-generated model functions, like this.

Code Block
vsi_nn_graph_t *
vnn_CreateGraph(const char *data_file_name, vsi_nn_context_t in_ctx,
				const vsi_nn_preprocess_map_element_t *pre_process_map,
				uint32_t pre_process_map_count,
				const vsi_nn_postprocess_map_element_t *post_process_map,
				uint32_t post_process_map_count)
{
	/* Dispatch to the Acuity-generated create function for this model. */
	return vnn_CreateYolov8sDemo(data_file_name, in_ctx, pre_process_map,
									pre_process_map_count, post_process_map,
									post_process_map_count);
} /* vnn_CreateGraph() */

void vnn_ReleaseGraph(vsi_nn_graph_t *graph, vsi_bool release_ctx)
{
	/* Dispatch to the Acuity-generated release function for this model. */
	vnn_ReleaseYolov8sDemo(graph, release_ctx);
} /* vnn_ReleaseGraph() */

3.3.3. Build and Evaluation (C3V Board)

Copy the project files and the val2017 dataset to the C3V board, and build the project on C3V Ubuntu with the following command.

...

Code Block
./mapTest.sh ./bin/network_binary.nb ~/dataset/val2017/ ./c3v_detect_val2017.json

The results will be printed in the terminal as follows:

...