This document describes how to test the YOLOv8s detection mAP on the COCO 2017 dataset. We provide hybrid quantization to convert the model.
1. Prerequisites
1.1. Provided Files
Download C3VHybrid_ValidationQuantization_Resources_v1.0.zip and unzip it. We use 'provided_files' to refer to the folder you unzip it into. The folder structure is as follows:
...
1.2. Conda Environment Setup
Please refer to Setup_Miniconda3_Environment(C3VHybrid_ValidationQuantization).pdf to set up the Conda environment quantize_yolov8s_demo on the Ubuntu PC and the Python 3.8 environment (3.8_python) on the C3V.
1.3. Data Preparation (Ubuntu PC)
Download the COCO 2017 train and validation images and their annotations for evaluation. Unzip the downloaded files and organize them according to the folder structure shown below.
...
Copy the prepare_inputs folder from the provided_files/python_scripts/ directory to your desired location, then open a terminal within the prepare_inputs folder. In the terminal, activate the Conda environment quantize_yolov8s_demo and run the following commands.
Code Block |
---|
python prepare_quantization_dataset.py
python prepare_validation_dataset_i8_bin.py |
Once the scripts finish, the folder should contain the following new items:
...
Later in this documentation, all references to prepare_inputs refer to this folder.
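For orientation, the sketch below shows the kind of preprocessing such preparation scripts typically perform for YOLOv8: letterbox-resize each image to 640x640 with padding, scale pixels to [0, 1], and reorder HWC to NCHW. This is an assumption about the scripts' behavior for illustration only, not their actual code.
Code Block |
---|
# Minimal sketch of YOLOv8-style letterbox preprocessing (assumed, illustrative).
import cv2
import numpy as np

def letterbox(image, size=640, pad_value=114):
    # Resize with preserved aspect ratio, then pad to a square canvas.
    h, w = image.shape[:2]
    scale = min(size / h, size / w)
    nh, nw = round(h * scale), round(w * scale)
    resized = cv2.resize(image, (nw, nh))
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

img = cv2.imread("val2017/000000000139.jpg")   # any COCO val2017 image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # OpenCV loads BGR; the model expects RGB
img = letterbox(img)
tensor = img.astype(np.float32) / 255.0        # matches the 1/255 input scale used later
tensor = tensor.transpose(2, 0, 1)[None]       # HWC -> NCHW, shape (1, 3, 640, 640) |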
2. Model
...
Transition (Ubuntu PC / Acuity)
...
Copy these files:
...
all the files under provided_files/python_scripts/ort_pipeline/
Set up the Conda environment named 3.8.10_python_acuity by referring to the Acuity package requirements in Section 4 of the C3V_AI_Platform_20240605.pdf.
Place these items in an empty folder:
hy_layer.txt (provided_files/)
yolov8s_inspiren.onnx (provided_files/)
transform_yolov8s_demo_acuity_6.30.7.py (provided_files/python_scripts/)
processed_inputs_for_quantization folder (prepare_inputs/)
inputs_for_acuity_quantization.txt (prepare_inputs/)
processed_images_for_validation_meta_info.json (prepare_inputs/)
Also move (or copy) the processed_images_for_validation folder into it. Then open a terminal within the folder, activate the Conda environment quantize_yolov8s_demo, and run the following commands.
Code Block |
---|
python extract_model.py
python quantize_model.py
python model_inference_onnxruntime.py
python process_outputs_to_coco_format_onnxruntime.py
python coco_eval_onnxruntime.py |
The command 'python quantize_model.py' will generate the quantized_extracted_yolov8s_inspiren.onnx file, which will be used in the next section.
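For reference, hybrid quantization here means quantizing most of the network to INT8 while leaving selected layers in float. The sketch below shows that pattern with onnxruntime's static quantization API; the calibration reader, the assumption that hy_layer.txt lists the float layers, and the intermediate file name are illustrative, not the actual contents of quantize_model.py.
Code Block |
---|
# Illustrative sketch of hybrid (mixed-precision) static quantization.
# File names and the role of hy_layer.txt are assumptions, not the real script.
import os
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

class FolderDataReader(CalibrationDataReader):
    """Feeds preprocessed calibration tensors saved as .npy files."""
    def __init__(self, folder, input_name="images"):
        paths = sorted(os.path.join(folder, f) for f in os.listdir(folder) if f.endswith(".npy"))
        self._iter = iter(paths)
        self._input_name = input_name

    def get_next(self):
        path = next(self._iter, None)
        return None if path is None else {self._input_name: np.load(path)}

# Assumption: hy_layer.txt lists the node names to keep in float precision.
with open("hy_layer.txt") as f:
    float_nodes = [line.strip() for line in f if line.strip()]

quantize_static(
    "extracted_yolov8s_inspiren.onnx",             # hypothetical intermediate model
    "quantized_extracted_yolov8s_inspiren.onnx",
    FolderDataReader("processed_inputs_for_quantization"),
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
    nodes_to_exclude=float_nodes,                  # the "hybrid" part: these stay float
) |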
The evaluation results will be printed in the terminal as follows:
...
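For reference, the core of coco_eval_onnxruntime.py is presumably the standard pycocotools evaluation loop shown below; the file names are illustrative.
Code Block |
---|
# Standard COCO bbox mAP evaluation with pycocotools (illustrative file names).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")      # ground-truth annotations
coco_dt = coco_gt.loadRes("detections_coco_format.json")  # predictions in COCO format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the AP/AR table, including mAP@[0.5:0.95] |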
3. Model Evaluation (Ubuntu PC / C3V)
3.1. Transition (Ubuntu PC / Acuity)
...
(prepare_inputs/)
in an empty folder. Open a terminal within the folder, activate the Conda environment 3.8.10_python_acuity, and run the following commands.
import
Code Block |
---|
python {path_to_your_pegasus.py} import onnx --model="quantized_extracted_yolov8s_inspiren.onnx" --inputs="images" --input-size-list="3,640,640" --size-with-batch="False" --input-dtype-list="float" --outputs="onnx::Concat_455 onnx::Concat_456" --output-model="quantized_extracted_yolov8s_inspiren_onnx.json" --output-data="quantized_extracted_yolov8s_inspiren_onnx.data" |
generate inputmeta file
Code Block |
---|
python {path_to_your_pegasus.py} generate inputmeta --model="quantized_extracted_yolov8s_inspiren_onnx.json" --input-meta-output="quantized_extracted_yolov8s_inspiren_onnx_inputmeta.yml" |
generate post-process file
Code Block |
---|
python {path_to_your_pegasus.py} generate postprocess-file --model="quantized_extracted_yolov8s_inspiren_onnx.json" --postprocess-file-output="quantized_extracted_yolov8s_inspiren_onnx_postprocess_file.yml" |
Modify the generated inputmeta file quantized_extracted_yolov8s_inspiren_onnx_inputmeta.yml as follows:
Change the reverse_channel argument from true to false
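If you prefer to script this edit, the sketch below flips every reverse_channel flag to false with PyYAML; it assumes only that the file is valid YAML, since the exact layout of the generated inputmeta file may vary.
Code Block |
---|
# Sketch: set every reverse_channel flag to false in the generated inputmeta file.
import yaml

PATH = "quantized_extracted_yolov8s_inspiren_onnx_inputmeta.yml"

def flip_reverse_channel(node):
    # Walk the nested structure and overwrite any 'reverse_channel' key.
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "reverse_channel":
                node[key] = False
            else:
                flip_reverse_channel(value)
    elif isinstance(node, list):
        for item in node:
            flip_reverse_channel(item)

with open(PATH) as f:
    meta = yaml.safe_load(f)
flip_reverse_channel(meta)
with open(PATH, "w") as f:
    yaml.safe_dump(meta, f, default_flow_style=False, sort_keys=False) |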
export
Run the following commands:
Code Block |
---|
export IDE_PATH={path_to_your_ide}
python {path_to_your_pegasus.py} export ovxlib --model="quantized_extracted_yolov8s_inspiren_onnx.json" --model-data="quantized_extracted_yolov8s_inspiren_onnx.data" --with-input-meta="quantized_extracted_yolov8s_inspiren_onnx_inputmeta.yml" --postprocess-file="quantized_extracted_yolov8s_inspiren_onnx_postprocess_file.yml" --output-path="ovxlib_application/quantized_extracted_yolov8s_inspiren" --batch-size=1 --dtype="quantized" --model-quantize="quantized_extracted_yolov8s_inspiren_onnx.quantize" --viv-sdk="${IDE_PATH}/cmdtools/" --optimize="VIP9000NANODI_PLUS_PID0X1000000B" --pack-nbg-unify |
The above commands can also be copied from acuity_commands_for_yolov8s_inspiren.txt in the provided_files/ directory. Then set the environment variables, for example:
Code Block |
---|
export ACUITY_EXAMPLE_PATH=~/Verisilicon/VerisiliconVIP9000ToolRelease_20240522/Extracted/Acuity_Toolkit_Whl_6.30.7_20240521/acuity_examples
export ACUITY_PATH=~/Verisilicon/ACUITY/acuity-toolkit-whl-6.30.7-cp38/bin
export IDE_PATH=~/Verisilicon/IDE/VivanteIDE5.10.1 |
Set the environment variables according to your own folder structure and placement. The acuity_examples version is 5272e22.
Import, quantize, and export the model by running:
Code Block |
---|
python transform_yolov8s_demo_acuity_6.30.7.py |
Once the export finishes, a folder named ovxlib_application_nbg_unify should be created. Under this folder:
Modify vnn_pre_process.c: in the function _load_input_meta(), set the input scale for each channel to 1/255 (0.00392156862):
Code Block |
---|
input_meta_tab[0].image.scale[0] = 0.00392156862;
input_meta_tab[0].image.scale[1] = 0.00392156862;
input_meta_tab[0].image.scale[2] = 0.00392156862; |
Modify vnn_post_process.c according to modify_vnn_post_process_c.txt in the provided_files/ directory.
3.2. Inference by Python
3.2.1. Build the project (Ubuntu PC)
...
Then import the modified ovxlib_application_nbg_unify folder into the Vivante IDE and build the project with the proper build configurations. Related information can be found in Section 6 of the C3V_AI_Platform_20240605.pdf.
Copy these to the C3V board:
built project folder
processed_images_i8_bin_for_validation.zip (prepare_inputs/)
c3v_inference_yolov8sdemo.py (provided_files/python_scripts/c3v_pipeline/)
3.2.2. Inference (C3V Board)
Unzip processed_images_i8_bin_for_validation.zip into the built project folder and move c3v_inference_yolov8sdemo.py into the same folder.
Open a terminal within the built project folder and give execute permission to the executable:
Code Block |
---|
chmod +x ./Debug/yolov8sdemo |
Then activate the Conda environment 3.8_python, and run the following command.
Code Block |
---|
python c3v_inference_yolov8sdemo.py |
This command may take some time. Once complete, you should find two new files:
...
Copy them back to the Ubuntu PC.
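For orientation, c3v_inference_yolov8sdemo.py presumably iterates over the prepared .bin inputs and drives the built executable for each one, collecting the raw predictions. A hedged sketch of that pattern follows; the executable's argument convention and the input folder name are assumptions.
Code Block |
---|
# Hedged sketch: run the built executable over every prepared .bin input.
# The argument convention (<exe> <nb file> <input>) is an assumption.
import glob
import subprocess

EXECUTABLE = "./Debug/yolov8sdemo"
NBG_FILE = "network_binary.nb"  # exported network binary graph

for bin_path in sorted(glob.glob("processed_images_i8_bin_for_validation/*.bin")):
    subprocess.run([EXECUTABLE, NBG_FILE, bin_path], check=True) |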
3.2.3. Evaluation (Ubuntu PC / C3V)
Place these files in an empty folder:
Inference_Raw_Predictions.zip (copied back from the C3V board)
c3v_inputs_order.txt (copied back from the C3V board)
processed_images_i8_bin_for_validation_meta_info.json (prepare_inputs/)
process_outputs_to_coco_format.py (provided_files/python_scripts/c3v_pipeline/)
coco_eval.py (provided_files/python_scripts/c3v_pipeline/)
Open a terminal within the folder, create a directory named Inference_Raw_Predictions, then extract Inference_Raw_Predictions.zip into that directory. Activate the Conda environment (quantize_yolov8s_demo on the Ubuntu PC, or 3.8_python on the C3V board) and run:
Code Block |
---|
python process_outputs_to_coco_format.py
python coco_eval.py |
The results will be printed in the terminal as follows:
...
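For reference, process_outputs_to_coco_format.py presumably writes detections in the standard COCO results format that pycocotools' loadRes() expects; a minimal illustration follows, with made-up values and an illustrative output file name.
Code Block |
---|
# The standard COCO detection-results format (illustrative values).
import json

results = [
    {
        "image_id": 139,                     # COCO image id
        "category_id": 1,                    # COCO category id (1 = person)
        "bbox": [431.0, 217.4, 42.9, 94.3],  # [x, y, width, height] in pixels
        "score": 0.87,                       # detection confidence
    },
]
with open("detections_coco_format.json", "w") as f:
    json.dump(results, f) |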
3.3. Inference by C code
3.3.1. Prepare the environment (C3V Board)
Code Block |
---|
sudo apt update
sudo apt install python3-pip libopencv-dev libjsoncpp-dev
pip install pycocotools |
3.3.2. Prepare the project (Ubuntu PC / C3V)
Download the mapTestTools_v1.1.zip file and unzip it. The project code tree looks like this:
...
Copy these three files to the framework/model folder; they were exported by the Acuity toolkit in step 3.1: https://sunplus.atlassian.net/wiki/spaces/C3/pages/2354020355/C3V+Validation+Guide#3.1.-Transition-(Ubuntu-PC-%2F-Acuity)
network_binary.nb
vnn_quantized_extracted_yolov8s_inspiren.c
vnn_quantized_extracted_yolov8s_inspiren.h
Modify model/vnn_model.c like this:
Code Block |
---|
#include "common.h" #include "vnn_quantized_extracted_yolov8s_inspiren.h" vsi_nn_graph_t * vnn_CreateGraph(const char *data_file_name, vsi_nn_context_t in_ctx, const vsi_nn_preprocess_map_element_t *pre_process_map, uint32_t pre_process_map_count, const vsi_nn_postprocess_map_element_t *post_process_map, uint32_t post_process_map_count) { return vnn_Createquantized_extracted_yolov8s_inspirenCreateYolov8sDemo(data_file_name, in_ctx, pre_process_map, pre_process_map_count, post_process_map, post_process_map_count); } /* vnn_CreateGraph() */ void vnn_ReleaseGraph(vsi_nn_graph_t *graph, vsi_bool release_ctx) { vnn_Releasequantized_extracted_yolov8s_inspiren vnn_ReleaseYolov8sDemo(graph, release_ctx); } /* vnn_ReleaseGraph() */ |
3.3.3. Build and Evaluation (C3V Board)
Copy the project files to the C3V, copy the val2017 dataset to the C3V, and build the project on C3V Ubuntu with the following command.
...
Code Block |
---|
./mapTest.sh ./bin/network_binary.nb ~/dataset/val2017 ./val2017.json |
The results will be printed in the terminal as follows:
...