mAP test SOP with SNNF

This document walks through an example of using YOLOv8s object detection to measure model mean average precision (mAP) on C3V.

1. Prepare the environment (on Ubuntu)

sudo apt update
sudo apt install python3-pip libopencv-dev libjsoncpp-dev
pip install pycocotools
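Optionally, verify the Python part of the setup before moving on; a one-line import check is enough (a minimal sketch, nothing SNNF-specific):

# Verify pycocotools installed correctly; an ImportError here means the pip step failed.
from pycocotools.coco import COCO
print("pycocotools import OK")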

2. Cross-compile the sample (on Ubuntu)

First, build the SNNF libraries with the snnf_build.sh script, then execute the following command:

make -C unittest/mapTest/ && make install

After the command completes, you should see a log like the following:

---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
total 444
-rwxr-xr-x 1 c3v c3v 118528 Jan 8 17:10 snnf_map_demo
-rwxr-xr-x 1 c3v c3v 332096 Jan 8 17:10 snnf_nnsample
---------------------------------------------------------------------------------
install
---------------------------------------------------------------------------------
total 36
drwxr-xr-x 2 c3v c3v 4096 Jan 8 17:10 bin
drwxr-xr-x 6 c3v c3v 4096 Jan 8 17:10 include
drwxr-xr-x 3 c3v c3v 4096 Jan 8 16:54 lib
drwxr-xr-x 7 c3v c3v 4096 Jan 8 17:10 resource
drwxr-xr-x 4 c3v c3v 4096 Jan 8 17:10 samples
-rwxr-xr-x 1 c3v c3v 473 Jan 8 17:10 snnf_build_samples.sh
-rwxr-xr-x 1 c3v c3v 1618 Jan 8 17:10 snnf_env.sh
-rwxr-xr-x 1 c3v c3v 430 Jan 8 17:10 snnf_run.sh
drwxr-xr-x 9 c3v c3v 4096 Jan 8 17:10 thirdparty
---------------------------------------------------------------------------------

The snnf_map_demo app has now been built and installed in the following path: release/bin/

The release directory tree looks like this:

(base) c3v@c3v:~/code/snnf/release$ tree -L 2
.
├── bin
│   ├── snnf_map_demo
│   └── snnf_nnsample
├── include
│   ├── NNLogger.h
│   ├── NNModel.h
│   ├── ...
│   └── type
├── lib
│   ├── libnnRoutines.a
│   ├── ...
│   └── plugin
├── resource
│   ├── config
│   ├── ...
│   └── video
├── samples
│   ├── CMakeLists.txt
│   ├── ...
│   └── unittest
├── snnf_build_samples.sh
├── snnf_env.sh
├── snnf_run.sh
└── thirdparty
    ├── ffmpeg
    ├── ...
    └── pytorch

3. Inference (on C3V)

Please prepare the COCO dataset: 2017 Val images [5K/1GB] from https://cocodataset.org/#download
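If you prefer to script the download, a minimal Python sketch is below; it assumes the standard COCO mirror at images.cocodataset.org and unpacks into the current directory (~1 GB, 5000 images):

# Download and unpack the COCO val2017 images (hedged sketch, ~1 GB).
import urllib.request, zipfile

url = "http://images.cocodataset.org/zips/val2017.zip"
urllib.request.urlretrieve(url, "val2017.zip")
with zipfile.ZipFile("val2017.zip") as zf:
    zf.extractall(".")  # creates ./val2017 containing the 5000 validation images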

Copy the whole release folder and the val2017 dataset to the C3V, then run the mAP test with the following commands:

. ./snnf_env.sh
./bin/snnf_map_demo ~/c3v/dataset/val2017 ./detection_results_coco_val2017.json

snnf_map_demo takes two parameters:

  • param1: the path to the COCO val2017 dataset.

  • param2: the output file name; the YOLOv8s detection results are written here in COCO results format (sketched below).
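Concretely, pycocotools (used in step 4) expects the result file to be a JSON list of per-detection records. A minimal sketch for sanity-checking the file is shown below; the field names are the standard COCO detection-results keys, and the example values in the comment are illustrative rather than real snnf_map_demo output.

# Sanity-check the results file produced by snnf_map_demo (hedged sketch).
import json

with open("detection_results_coco_val2017.json") as f:
    dets = json.load(f)

# Each record should look roughly like:
# {"image_id": 139, "category_id": 1, "bbox": [412.8, 157.6, 53.1, 138.8], "score": 0.91}
assert isinstance(dets, list)
assert {"image_id", "category_id", "bbox", "score"} <= set(dets[0].keys())
print(len(dets), "detections across", len({d["image_id"] for d in dets}), "images")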

When the run finishes, the log looks like this:

sunplus@ubuntu:~/c3v/app/snnf_release$ ./bin/snnf_map_demo ~/c3v/dataset/val2017 ./detection_results_coco_val2017.json
1736328302307|7f7baa3010|T|common: [app]dataset path:/home/sunplus/c3v/dataset/val2017 save file name:./detection_results_coco_val2017.json
2025-01-08 17:25:02
2025-01-08 17:31:20####################################################][5000 of 5000 | 100.00%][Avg: 75.53ms FPS: 13.24]
Time cost: 377.88s, Average FPS: 13.23

4. Evaluation (on Ubuntu)

Download coco_eval.zip and unzip it on the Ubuntu host. The archive contains two files:

  • coco_eval.py

  • instances_val2017.json

Copy detection_results_coco_val2017.json from the C3V back to Ubuntu.

Use coco_eval.py to evaluate:

python3 coco_eval.py ./detection_results_coco_val2017.json

The results look like this; the first line, AP at IoU=0.50:0.95, is the headline COCO mAP:

~$ python3 coco_eval.py ./detection_results_coco_val2017.json
loading annotations into memory...
Done (t=0.48s)
creating index...
index created!
Loading and preparing results...
DONE (t=3.46s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=44.35s).
Accumulating evaluation results...
DONE (t=10.41s).
 Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.439
 Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.605
 Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.476
 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.246
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.487
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.600
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.350
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.580
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.633
 Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.429
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.697
 Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.786
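For reference, coco_eval.py is essentially a thin wrapper around pycocotools; a minimal equivalent is sketched below (assuming instances_val2017.json sits in the same directory, as in the zip):

# Minimal COCO bbox evaluation with pycocotools (sketch, not the shipped script).
import sys
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")  # ground truth from the zip
coco_dt = coco_gt.loadRes(sys.argv[1])    # detection results from the C3V run
ev = COCOeval(coco_gt, coco_dt, "bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints the AP/AR table shown above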
