V1.0.0 of SMZF
This is the formal release V1.0.0 of Sunplus Model Zoo Framework.
Target of V1.0.0
This is the first version of Sunplus Model Zoo Framework.
SMZF V1.0.0 was split out of Model-Zoo V1.0.0; it carries the application-logic part of model validation and is independent of any specific model.
Running SMZF alone is not meaningful: SMZF V1.0.0 must be used together with Model-Zoo V1.0.1, which has the same functional form as Model-Zoo V1.0.0. The model types and quantities are equivalent to SNNF V1.2.0.
Resource
Please get the V1.0.0 release resource here, tag: V1.0.0.
Usage of V1.0.0
How to verify Official Demos
You can use the prebuilt binary we provide to start, as follows:
root@ubuntu:/smzf/release# ./bin/nnModel
Usage: ./bin/nnModel [-m|-a|-h] [-i|-v|-o|-p|option]
Version: 1.0.0
Time:
[-m,--model <model>] run a single model
<model>:AgeU8 Det10gI16 GenderAgeU8
HumanAttrHybridU8 HumanAttrI16 LightFaceHybridU8
LightFaceI16 OcrClsI16 OcrDetI16
OcrRecI16 ReidI16 RtmdetsI16
StgcnI16 VehicleAttrHybridU8 VehicleAttrI16
W600kR50I16 Yolo11sClassifyHybridI8 Yolo11sDetectionHybridI8
Yolo11sObbHybridI8 Yolo11sPoseHybridI8 Yolo11sSegmentationHybridI8
Yolov10sDetectionHybridI8 Yolov5sDetectionU8 Yolov8nCcpdDetectionOptiU8
Yolov8nClassifyHybridI8 Yolov8nDetectionOptiI16 Yolov8nObbOptiI16
Yolov8nPoseOptiI16 Yolov8nSegmentationOptiI16 Yolov8sClassifyHybridI8
Yolov8sDetectionHybridI8 Yolov8sDetectionI16 Yolov8sDetectionOptiI16
Yolov8sObbHybridI8 Yolov8sPoseI16 Yolov8sSegmentationHybridI8
Yolov8sSegmentationI16
example:./bin/nnModel -m Yolov8sDetectionI16
./bin/nnModel --model HumanAttrI16
[-i,--image file] set image file to nn detection.
<file>: file name
[-c | option]: test count; this parameter only works together with -i
example:./bin/nnModel -m Yolo11sDetectionHybridI8 -i resource/image/person.jpg -c 2
./bin/nnModel -m Yolo11sDetectionHybridI8 --image resource/image/person.jpg -c 2
[-v,--video file] set video file to nn detection.
<file>: file name
example:./bin/nnModel -m Yolo11sDetectionHybridI8 -v resource/video/humanTracking.mp4
./bin/nnModel -m Yolo11sDetectionHybridI8 --video resource/video/humanTracking.mp4
[-o,--output file] specify the output file name for saving results.
<file>: file name with extension (e.g., output.jpg, output.json, output.mp4)
This parameter must be used in conjunction with imageWriter, jsonWriter, or videoWriter.
example:./bin/nnModel -m Yolo11sDetectionHybridI8 -i resource/image/person.jpg -o output.jpg
./bin/nnModel -m Yolo11sDetectionHybridI8 --image resource/image/person.jpg -o output.json
./bin/nnModel -m Yolo11sDetectionHybridI8 -v resource/video/humanTracking.mp4 -o output.mp4
[-a,--all] run all model testing
[-p,--performance] enable performance test
Release folder structure
apps: Model-Zoo applications such as the demo and mAP samples, plus the bare-metal-mode sample named bmi.
build_apps.sh: a shell script that builds the apps using only the resources in the release folder.
bin: prebuilt applications.
nnMap: a prebuilt mAP program that runs on the C3V Linux platform. It outputs the detection results in JSON format; you can then take the JSON file to a PC and compute the mAP value against the COCO reference (see the sketch after this list). Customers can follow this sample code to build similar features of their own.
mapTest.sh: a shell script for the sequential mAP operations: first collect the detection results as JSON, then calculate the mAP value on C3V.
nnModel: a prebuilt demo program that runs on the C3V Linux platform, showcasing the functionality of Model-Zoo.
cmake: the CMake configs for both native builds on C3V and cross-compiling.
env.sh: an executable script that sets up the build and runtime environment.
include: header files of the Model-Zoo SDK.
lib: libraries of the Model-Zoo SDK.
nnthirdparty: third-party dependencies, as the name implies.
python_res: the Python-interface libraries and related resources; please refer to the usage in the apps folder.
resource
config: configuration files for specific features.
font: TTF font file used by the plotting sample.
image: image files used for testing.
misc: other assorted auxiliary resources.
models: models used by the sample programs, taken from Model-Zoo.
video: video files used for testing.
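For the PC-side mAP step mentioned under nnMap, here is a minimal sketch, assuming the JSON emitted by nnMap follows the standard COCO detection-results format and that pycocotools is installed on the PC; both file names are hypothetical placeholders.
# mAP scoring sketch for the PC side, assuming nnMap's JSON follows the
# standard COCO detection-results format. Both file names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # COCO ground truth
coco_dt = coco_gt.loadRes("nnMap_results.json")       # JSON produced by nnMap

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the AP/AR table, including mAP@[.5:.95]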
How to run SMZF demo
Copy the release folder to C3V Linux.
/smzf/release # ls -alh
total 44K
drwxrwxr-x 10 y.bin y.bin 4.0K Apr 25 15:41 .
drwxrwxr-x 11 y.bin y.bin 4.0K Apr 25 15:40 ..
drwxr-xr-x 7 y.bin y.bin 4.0K Apr 25 15:40 apps
drwxrwxr-x 3 y.bin y.bin 4.0K Apr 25 15:41 bin
drwxr-xr-x 2 y.bin y.bin 4.0K Apr 25 15:40 cmake
-rw-r--r-- 1 y.bin y.bin 532 Apr 24 17:14 env.sh
drwxr-xr-x 7 y.bin y.bin 4.0K Apr 25 15:40 include
drwxrwxr-x 2 y.bin y.bin 4.0K Apr 25 15:40 lib
drwxrwxr-x 11 y.bin y.bin 4.0K Apr 25 15:40 nnthirdparty
drwxrwxr-x 5 y.bin y.bin 4.0K Apr 25 15:41 python_res
drwxr-xr-x 8 y.bin y.bin 4.0K Apr 25 15:40 resource
Export environment variables.
/smzf/release # source env.sh
Run ./bin/nnModel for the Model-Zoo demo.
a. One-time input
./bin/nnModel -m Yolo11sDetectionHybridI8
# ./bin/nnModel -m Yolo11sDetectionHybridI8
1745569567470|7f9a721040|I|common: [nn]Yolo11sDetectionHybridI8 in
1745569568302|7f83565740|I|common: [nn]GeneralModelOutputListener detect from resource/image/person.jpg, the result: (box: 614.07 150.31 273.08 642.31) --> label: 0(person), confidence: 0.95, fin: false
1745569568302|7f83565740|I|common: [nn]GeneralModelOutputListener detect from resource/image/person.jpg, the result: (box: 315.09 175.75 208.19 619.44) --> label: 0(person), confidence: 0.94, fin: false
1745569568302|7f83565740|I|common: [nn]GeneralModelOutputListener detect from resource/image/person.jpg, the result: (box: 450.46 288.01 65.35 176.45) --> label: 26(handbag), confidence: 0.67, fin: true
1745569568318|7f9a721040|I|common: [nn]Yolo11sDetectionHybridI8 out, retVal: -0x0
b. Read input from the image file
./bin/nnModel -m Yolo11sDetectionHybridI8 -i resource/image/person640x640.jpg
# ./bin/nnModel -m Yolo11sDetectionHybridI8 -i resource/image/person640x640.jpg
1745569591433|7fa6acb040|I|common: [nn]Yolo11sDetectionHybridI8 in
1745569591716|7f8f975740|I|common: [nn]GeneralModelOutputListener detect from resource/image/person640x640.jpg, the result: (box: 0.00 14.00 626.00 622.00) --> label: 0(person), confidence: 0.87, fin: true
1745569591728|7fa6acb040|I|common: [nn]Yolo11sDetectionHybridI8 out, retVal: -0x0
c. Read inputs from the video file.
./bin/nnModel -m Yolo11sDetectionHybridI8 -v resource/video/humanCount.mp4
# ./bin/nnModel -m Yolo11sDetectionHybridI8 -v resource/video/humanCount.mp4
1745569630547|7f8b17b040|I|common: [nn]Yolo11sDetectionHybridI8 in args:4
1745569631022|7f717d5740|T|common: [nn]streaming test: runner func in
1745569631022|7f8b17b040|T|common: [nn]Press: 'q' or 'Q' to quit the test
1745569631256|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 675.38 131.25 302.25 870.00) --> label: 0(person), confidence: 0.93, fin: false
1745569631256|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1584.00 427.88 63.00 161.25) --> label: 0(person), confidence: 0.32, fin: true
1745569631340|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 676.50 128.62 310.50 873.75) --> label: 0(person), confidence: 0.93, fin: false
1745569631340|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1583.25 427.50 61.50 162.00) --> label: 0(person), confidence: 0.34, fin: true
1745569631432|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 678.00 127.50 307.50 876.00) --> label: 0(person), confidence: 0.93, fin: false
1745569631432|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1584.75 430.12 58.50 159.75) --> label: 0(person), confidence: 0.27, fin: true
1745569631522|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 679.12 128.62 306.75 876.75) --> label: 0(person), confidence: 0.93, fin: false
1745569631523|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1583.25 427.88 61.50 161.25) --> label: 0(person), confidence: 0.27, fin: true
1745569631614|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 683.25 126.00 307.50 876.00) --> label: 0(person), confidence: 0.94, fin: true
1745569631711|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 685.50 127.12 306.00 876.75) --> label: 0(person), confidence: 0.93, fin: true
1745569631798|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 686.25 127.88 306.00 878.25) --> label: 0(person), confidence: 0.93, fin: true
1745569631889|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 686.62 130.88 309.75 872.25) --> label: 0(person), confidence: 0.93, fin: true
1745569631979|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 686.25 132.38 309.00 869.25) --> label: 0(person), confidence: 0.92, fin: true
1745569632072|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 687.00 133.50 309.00 867.00) --> label: 0(person), confidence: 0.92, fin: false
1745569632072|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1584.75 430.12 58.50 159.75) --> label: 0(person), confidence: 0.25, fin: true
......
1745569678277|7f717d5740|T|common: [nn]streaming test: could not get input data
1745569678277|7f717d5740|T|common: [nn]streaming test runner func quit
1745569678291|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 946.88 150.00 329.25 898.50) --> label: 0(person), confidence: 0.93, fin: false
1745569678291|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 680.06 274.88 274.88 698.25) --> label: 0(person), confidence: 0.82, fin: false
1745569678291|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1209.00 655.88 94.50 191.25) --> label: 26(handbag), confidence: 0.35, fin: true
1745569678394|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 956.25 147.00 331.50 913.50) --> label: 0(person), confidence: 0.93, fin: false
1745569678394|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 683.62 267.75 258.75 706.50) --> label: 0(person), confidence: 0.87, fin: false
1745569678394|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1208.62 663.00 102.75 184.50) --> label: 26(handbag), confidence: 0.54, fin: true
1745569678490|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 960.00 148.88 325.50 908.25) --> label: 0(person), confidence: 0.93, fin: false
1745569678490|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 682.12 270.00 261.75 717.00) --> label: 0(person), confidence: 0.88, fin: false
1745569678490|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1208.62 663.00 102.75 184.50) --> label: 26(handbag), confidence: 0.57, fin: true
1745569678580|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 966.75 148.50 331.50 910.50) --> label: 0(person), confidence: 0.93, fin: false
1745569678580|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 680.62 269.62 261.75 705.75) --> label: 0(person), confidence: 0.88, fin: false
1745569678580|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1217.62 671.25 96.75 183.00) --> label: 26(handbag), confidence: 0.45, fin: true
1745569678677|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 979.50 155.62 328.50 900.75) --> label: 0(person), confidence: 0.92, fin: false
1745569678677|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 676.50 262.31 265.50 711.38) --> label: 0(person), confidence: 0.88, fin: false
1745569678677|7f71fe5740|I|common: [nn]GeneralModelOutputListener detect from , the result: (box: 1226.62 673.50 93.75 192.00) --> label: 26(handbag), confidence: 0.28, fin: true
1745569678698|7f8b17b040|I|common: [nn]Yolo11sDetectionHybridI8 out, retVal: -0x0
d. Save model inference results to an image.
./bin/nnModel -m Yolo11sPoseHybridI8 -o pose_detected_output.jpg
# ./bin/nnModel -m Yolo11sPoseHybridI8 -o pose_detected_output.jpg
1745569756838|7fb27a5040|I|common: [nn]Yolo11sPoseHybridI8 in
1745569757586|7f9b449740|I|common: [nn]outTensorData size: [470400,3]
1745569757594|7f9b449740|I|common: [nn]plot: 0 93%, [(854, 145) - (1170, 755)], person
1745569757601|7f9b449740|I|common: [nn]plot: 0 90%, [(1690, 188) - (1837, 644)], person
1745569757602|7f9b449740|I|common: [nn]plot: 0 89%, [(65, 125) - (233, 606)], person
1745569757603|7f9b449740|I|common: [nn]plot: 0 87%, [(1336, 333) - (1447, 683)], person
1745569757603|7f9b449740|I|common: [nn]plot: 0 86%, [(372, 253) - (481, 676)], person
1745569757733|7fb27a5040|I|common: [nn]Yolo11sPoseHybridI8 out, retVal: -0x0
e. Save model inference results to a JSON file.
./bin/nnModel -m Yolo11sPoseHybridI8 -o yolo11sPoseResults.json
# ./bin/nnModel -m Yolo11sPoseHybridI8 -o yolo11sPoseResults.json
1745569890583|7fa51cb040|I|common: [nn]Yolo11sPoseHybridI8 in
1745569890830|7f8de69740|I|common: [nn]outTensorData size: [470400,3]
1745569890847|7fa51cb040|I|common: [nn]Yolo11sPoseHybridI8 out, retVal: -0x0
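The schema of the saved JSON is defined by SMZF's jsonWriter; as a minimal, schema-agnostic sketch, you can inspect the file like this (assuming only that it is valid JSON):
# Minimal inspection sketch; assumes only that the file is valid JSON.
# The actual schema is defined by SMZF's jsonWriter.
import json

with open("yolo11sPoseResults.json") as f:
    results = json.load(f)

# Print the top-level structure to discover the schema.
if isinstance(results, dict):
    print("keys:", list(results.keys()))
else:
    print("entries:", len(results))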
How to verify Bare Metal Mode with the Python interface
Install the required environment.
sudo apt update
sudo apt install -y swig
sudo apt-get install python3-dev
sudo pip install opencv-python
You can use the Python sample we provide to start, as follows:
root@ubuntu:/smzf/release# python3 ./python_res/bmi/nn_bare_metal_mode_test.py
Usage: python3 ./python_res/bmi/nn_bare_metal_mode_test.py [-m|-h|-i]
[-m,--model <model>] run bare metal model
<model>
AgeU8 Det10gI16 GenderAgeU8
HumanAttrHybridU8 HumanAttrI16 LightFaceHybridU8
LightFaceI16 OcrClsI16 OcrDetI16
OcrRecI16 ReidI16 RtmdetsI16
StgcnI16 VehicleAttrHybridU8 VehicleAttrI16
W600kR50I16 Yolo11sClassifyHybridI8 Yolo11sDetectionHybridI8
Yolo11sObbHybridI8 Yolo11sPoseHybridI8 Yolo11sSegmentationHybridI8
Yolov10sDetectionHybridI8 Yolov5sDetectionU8 Yolov8nCcpdDetectionOptiU8
Yolov8nClassifyHybridI8 Yolov8nDetectionOptiI16 Yolov8nObbOptiI16
Yolov8nPoseOptiI16 Yolov8nSegmentationOptiI16 Yolov8sClassifyHybridI8
Yolov8sDetectionHybridI8 Yolov8sDetectionI16 Yolov8sDetectionOptiI16
Yolov8sObbHybridI8 Yolov8sPoseI16 Yolov8sSegmentationHybridI8
Yolov8sSegmentationI16
[-i,--image file] set image file to nn inference.
example:
python3 ./python_res/bmi/nn_bare_metal_mode_test.py -m Yolo11sDetectionHybridI8 -i resource/image/person.jpg
Run the Python test sample as in the example: it preprocesses the JPEG in the Python flow, runs inference on the quantized (dtype) data with the NPU, fetches the output tensor data back into the Python flow, and performs the post-processing there. A typical preprocessing step is sketched below.
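As a hedged illustration only (the real flow lives in nn_bare_metal_mode_test.py), YOLO-style preprocessing typically looks like this; the 640x640 input size and uint8 input dtype are assumptions:
# Hedged sketch of typical YOLO-style preprocessing, NOT the exact flow in
# nn_bare_metal_mode_test.py. Input size (640x640) and uint8 dtype are assumed.
import cv2
import numpy as np

def preprocess(path, size=640):
    img = cv2.imread(path)                      # BGR, HWC, uint8
    img = cv2.resize(img, (size, size))         # scale to model input size
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR -> RGB
    return np.ascontiguousarray(img, dtype=np.uint8)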
root@ubuntu:/smzf/release# python3 ./python_res/bmi/nn_bare_metal_mode_test.py -m Yolo11sDetectionHybridI8 -i resource/image/person.jpg
Prepare model...
Inference...
Result:
{(8400, 84, 1): array([13.8359375, 23.1875 , 34.9375 , ..., 0. , 0. ,
0. ], shape=(705600,))}
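The result comes back as a dict keyed by the output tensor shape, with a flattened NumPy array as the value (705600 = 8400 x 84). Below is a hedged decoding sketch, assuming the standard YOLO detection-head layout of 4 box parameters followed by 80 COCO class scores per candidate; the real axis order, score activation, and NMS step may differ.
# Hedged decoding sketch: assumes the flat array maps to the (8400, 84, 1)
# shape from the result key, with 4 box values + 80 class scores per
# candidate. Axis order, score activation, and NMS may differ in the real
# post-processing flow.
import numpy as np

def coarse_detections(result, conf_thres=0.5):
    (shape, flat), = result.items()             # single output tensor
    preds = flat.reshape(shape).squeeze(-1)     # -> (8400, 84)
    boxes, scores = preds[:, :4], preds[:, 4:]  # cx,cy,w,h | class scores
    cls_ids = scores.argmax(axis=1)
    confs = scores.max(axis=1)
    keep = confs > conf_thres                   # no NMS in this sketch
    return boxes[keep], cls_ids[keep], confs[keep]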
How to build SMZF
Cross-compile for the C3V environment.
First run config.sh to configure SMZF according to Model-Zoo's model information, then run build.sh to compile SMZF:
./smzf/build.sh --clean --> make clean for smzf.
./smzf/build.sh --clean --cross --> make clean; make for smzf.
./smzf/build.sh --clean --cross --python --> generate both the bin and the Python module for smzf.
All the resources will be installed to the release folder.
Copy the release folder to the C3V platform.
Set up environment variables.
a. Set the environment variables manually:
#!/bin/sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./nnthirdparty/NADKLogger/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./nnthirdparty/jsoncpp/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./nnthirdparty/NADKRoutines/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./nnthirdparty/ffmpeg/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./nnthirdparty/opencv/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./nnthirdparty/libtorch/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./nnthirdparty/nnbase/lib
b. Or simply run source env.sh to set the environment variables.
Then you can run ./bin/nnModel for the SMZF demo.
If you have modified the apps code in the release folder, you can use ./apps/build_apps.sh to rebuild the modified demo/mAP sample code:
./apps/build_apps.sh --clean --> make clean for the apps.
./apps/build_apps.sh --clean --cross --> make clean; make for the apps.
./apps/build_apps.sh --clean --cross --python --> generate both the bin and the Python module for the apps.
Performance
Please refer to <Performance of SMZF V1.0.0>.
Accuracy
Please refer to <Typical mAP data on C3V>.