
1. Prerequisites

1.1. Provided Files

Unzip the provided zip file. We use provided_files to refer to the folder you unzip it into. The folder structure is as follows:

image-20241120-074102.png

1.2. Conda Environment Setup

Please refer to Setup_Miniconda3_Environment(C3V_Validation).pdf to set up the Conda environment named quantize_yolov8s_demo on the Ubuntu PC and Python 3.8 on the C3V.
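For reference, a minimal starting point on the Ubuntu PC might look like the following; the exact Python version and package set should follow the PDF (python=3.8 here is an assumption):

conda create -n quantize_yolov8s_demo python=3.8 -y
conda activate quantize_yolov8s_demo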

1.3. Data Preparation (Ubuntu PC)

Download the COCO 2017 train and validation images and their annotations for evaluation. Unzip the downloaded files and organize them according to the folder structure shown below.

image-20241120-074438.png
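For reference, the standard COCO 2017 archives can be fetched like this (these are the usual public cocodataset.org links; adjust the unzip destinations to match the folder structure above):

wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip train2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip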

Copy the prepare_inputs folder from the provided_files/python_scripts/ directory to your desired location, and then open a terminal within the prepare_inputs folder. In the terminal, activate the Conda environment quantize_yolov8s_demo, and run the following commands.

python prepare_quantization_dataset.py
python prepare_validation_dataset.py

Once the scripts finish, the folder should contain the following new items:

image-20241120-074625.png

From this point on, all references to prepare_inputs refer to this folder.

2. Model Evaluation (Ubuntu PC / Onnxruntime)

Copy (or move) the following items to an empty folder:

  • all the files under provided_files/python_scripts/ort_pipeline/

  • provided_files/yolov8s_inspiren.onnx

  • the processed_images_for_validation folder (from prepare_inputs)

  • the processed_inputs_for_quantization folder (from prepare_inputs)

  • inputs_for_acuity_quantization.txt (from prepare_inputs)

  • processed_images_for_validation_meta_info.json (from prepare_inputs)

Then open a terminal within the folder, activate the Conda environment quantize_yolov8s_demo, and run the following commands.

python extract_model.py
python quantize_model.py
python model_inference_onnxruntime.py
python process_outputs_to_coco_format_onnxruntime.py
python coco_eval_onnxruntime.py

The command 'python quantize_model.py' will generate the quantized_extracted_yolov8s_inspiren.onnx file, which will be used in the next section.

The results will be printed in the terminal as follows:

image-20241120-081737.png
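If you want to sanity-check the quantized model outside the provided scripts, a minimal onnxruntime probe is sketched below. The input name images and the 1x3x640x640 shape come from the import command in Section 3.1; the random tensor only verifies shapes, not accuracy:

import numpy as np
import onnxruntime as ort

# Load the model produced by quantize_model.py
sess = ort.InferenceSession("quantized_extracted_yolov8s_inspiren.onnx",
                            providers=["CPUExecutionProvider"])

# Random input with the model's expected NCHW shape
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

outputs = sess.run(None, {"images": dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)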

3. Model Evaluation (Ubuntu PC / C3V)

3.1. Transition (Ubuntu PC / Acuity)

Set up the Conda environment named 3.8.10_python_acuity by referring to the Acuity package requirements in Section 4 of the C3V_AI_Platform_20240605.pdf. Copy the quantized_extracted_yolov8s_inspiren.onnx file generated in the previous section to an empty folder. Open a terminal within the folder, activate the Conda environment 3.8.10_python_acuity, and run the following commands.

  1. Import

python {path_to_your_pegasus.py} import onnx --model="quantized_extracted_yolov8s_inspiren.onnx" --inputs="images" --input-size-list="3,640,640" --size-with-batch="False" --input-dtype-list="float" --outputs="onnx::Concat_455 onnx::Concat_456" --output-model="quantized_extracted_yolov8s_inspiren_onnx.json" --output-data="quantized_extracted_yolov8s_inspiren_onnx.data"
  2. Generate the inputmeta file

python {path_to_your_pegasus.py} generate inputmeta --model="quantized_extracted_yolov8s_inspiren_onnx.json" --input-meta-output="quantized_extracted_yolov8s_inspiren_onnx_inputmeta.yml"
  3. Generate the post-process file

python {path_to_your_pegasus.py} generate postprocess-file --model="quantized_extracted_yolov8s_inspiren_onnx.json" --postprocess-file-output="quantized_extracted_yolov8s_inspiren_onnx_postprocess_file.yml"
  4. Modify the generated inputmeta file quantized_extracted_yolov8s_inspiren_onnx_inputmeta.yml as follows:

  • Change the reverse_channel argument from true to false
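After the change, the relevant line in the yml file should read:

reverse_channel: false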

  5. Export

Run the following commands:

export IDE_PATH={path_to_your_ide}

python {path_to_your_pegasus.py} export ovxlib --model="quantized_extracted_yolov8s_inspiren_onnx.json" --model-data="quantized_extracted_yolov8s_inspiren_onnx.data" --with-input-meta="quantized_extracted_yolov8s_inspiren_onnx_inputmeta.yml" --postprocess-file="quantized_extracted_yolov8s_inspiren_onnx_postprocess_file.yml" --output-path="ovxlib_application/quantized_extracted_yolov8s_inspiren" --batch-size=1 --dtype="quantized" --model-quantize="quantized_extracted_yolov8s_inspiren_onnx.quantize" --viv-sdk="${IDE_PATH}/cmdtools/" --optimize="VIP9000NANODI_PLUS_PID0X1000000B" --pack-nbg-unify

The above commands can also be copied from acuity_commands_for_yolov8s_inspiren.txt in the provided_files/ directory.

Once the export finishes, a folder named ovxlib_application_nbg_unify should be created. Under this folder:

  • Modify vnn_pre_process.c: in the function _load_input_meta(), set the per-channel input scales to 1/255 so that 8-bit pixel values are normalized to [0, 1]:

input_meta_tab[0].image.scale[0] = 0.00392156862; /* 1/255 */
input_meta_tab[0].image.scale[1] = 0.00392156862;
input_meta_tab[0].image.scale[2] = 0.00392156862;
  • Modify vnn_post_process.c according to modify_vnn_post_process_c.txt in the provided_files/ directory.

3.2. Inference by Python

3.2.1. Build the project

Import the modified ovxlib_application_nbg_unify folder into the Vivante IDE and build the project with the proper build configurations. Related information can be found in Section 5 of the C3V_AI_Platform_20240605.pdf.

Copy these to the C3V board:

  • built project folder

  • processed_images_for_validation.zip (prepare_inputs/)

  • c3v_inference_yolov8s_inspiren.py (provided_files/python_scripts/c3v_pipeline/)

3.2.2. Inference (C3V Board)

Unzip processed_images_for_validation.zip to the built project folder and move c3v_inference_yolov8s_inspiren.py to the built project folder.

Open a terminal within the built project folder, activate the Conda environment 3.8_python, and run the following command.

python c3v_inference_yolov8s_inspiren.py

This command may take some time. Once complete, you should find two new files:

  • c3v_inputs_order.txt

  • Inference_Raw_Predictions.zip

Copy them back to the Ubuntu PC.
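For example, with scp (the board address and project path are placeholders):

scp root@<c3v_ip>:<project_path>/c3v_inputs_order.txt .
scp root@<c3v_ip>:<project_path>/Inference_Raw_Predictions.zip .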

3.2.3. Evaluation (Ubuntu PC / C3V)

Place these files in an empty folder:

  • Inference_Raw_Predictions.zip

  • c3v_inputs_order.txt

  • processed_images_for_validation_meta_info.json (prepare_inputs/)

  • process_outputs_to_coco_format.py (provided_files/python_scripts/c3v_pipeline/)

  • coco_eval.py (provided_files/python_scripts/c3v_pipeline/)

Open a terminal within the folder, create a directory named inference_Raw_Predictions, and extract Inference_Raw_Predictions.zip into that directory. Activate the Conda environment quantize_yolov8s_demo and run:

python process_outputs_to_coco_format.py
python coco_eval.py

The results will be printed in the terminal as follows:

image-20241120-083247.png
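For reference, the bbox evaluation that coco_eval.py presumably performs follows the standard pycocotools flow; a minimal sketch, with illustrative file names, is:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations and the detections converted to COCO format
coco_gt = COCO("annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("detections_coco_format.json")

evaluator = COCOeval(coco_gt, coco_dt, "bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the AP/AR summary table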

3.3. Inference by C code

3.3.1. Prepare the pycocotools environment

sudo apt update
sudo apt install python3-pip
pip install pycocotools

3.3.2. Prepare the project

Download the mapTestTools project; its code tree looks like this:

image-20241120-090959.png
  1. Copy these three files to the vnn_nbg_model folder. They were exported by the Acuity toolkit in Section 3.1: https://sunplus.atlassian.net/wiki/spaces/C3/pages/2354020355/C3V+validation+Guilde#3.1.-Transition-(Ubuntu-PC-%2F-Acuity)

  • network_binary.nb

  • vnn_quantized_extracted_yolov8s_inspiren.c

  • vnn_quantized_extracted_yolov8s_inspiren.h

  2. Modify vnn_nbg_model/vnn_model.c as follows:

#include "common.h"
#include "vnn_quantized_extracted_yolov8s_inspiren.h"

/* Forward graph creation to the model-specific function generated by Acuity. */
vsi_nn_graph_t *
vnn_CreateGraph(const char *data_file_name, vsi_nn_context_t in_ctx,
                const vsi_nn_preprocess_map_element_t *pre_process_map,
                uint32_t pre_process_map_count,
                const vsi_nn_postprocess_map_element_t *post_process_map,
                uint32_t post_process_map_count)
{
    return vnn_Createquantized_extracted_yolov8s_inspiren(
        data_file_name, in_ctx,
        pre_process_map, pre_process_map_count,
        post_process_map, post_process_map_count);
}

/* Forward graph release to the generated counterpart. */
void vnn_ReleaseGraph(vsi_nn_graph_t *graph, vsi_bool release_ctx)
{
    vnn_Releasequantized_extracted_yolov8s_inspiren(graph, release_ctx);
}

3.3.3. Build and Evaluation on C3V

Copy the project files to the C3V and build them on C3V Ubuntu with the following command.

make -j && make install

Run the mAP test with the following command; the arguments are the compiled network binary, the COCO validation image folder, and the output JSON file for detection results:

./mapTest.sh ./bin/network.nb ./cocoVal2017 ./c3v_detect_results.json

The results will be printed in the terminal as follows:

image-20241120-092154.png
