V1.0.0 of SNNF
Target of V1.0.0
This is the first formal release of SNNF (Sunplus Neural Network Framework); the initial version is V1.0.0.
The release notes for this version are as follows:
The following NN modules and samples are included:
lightFace;
age;
lightFace + age;
det_10g;
det_10g + w600k_r50;
humanAttr;
yolov5s + humanAttr;
vehicleAttr;
yolov8s_detection + vehicleAttr;
yolov8s_detection;
yolov8s_obb;
yolov8s_pose;
yolov8s_segment;
yolov8s_classify;
yolov10s_detection;
yolov8n_obb_opti;
yolov8n_pose_opti;
yolov8n_segment_opti;
yolov8n_detection_opti;
yolov8n_classify;
RTMDet-s;
Human tracking;
Human falling detection;
yolov5s_detection;
OCRDet;
OCRCls;
OCRRec;
OCRDet + OCRRec;
OCRDet + OCRCls + OCRRec;
yolov8n_CCPD;
yolov8n_CCPD + OCRRec;
yolov8n_CCPD + OCRCls + OCRRec.
A command mechanism for listing models: you can query all the available models, model combinations, applications, and so on, as shown above.
Provide a sample demo for reference; users can directly run the sample app to verify the NN environment on C3V.
Parallel operation in multi-model applications maximizes the utilization of the NPU and CPU, and ultimately the processing frame rate of the application (see the sketch after this list).
Implement official and customized partitions: you can copy an official model into the customized zone for further customization, or place your own model in the customized zone as a new model.
Define specific paths for resources such as config, font, test image, model file, and test video.
Define specific assist folders such as botSortTrack, imageWriter, and videoWriter.
Provide libraries and header files so that users can integrate specific NN modules into their applications through the standard interfaces we provide.
Provide a compile script (snnf_build.sh) so that users can easily compile SNNF and place the generated resources in the specified release directory.
Open-source SNNF so that users can better understand its internal operation.
Optimize the system to make it more robust; after long-term testing, the system is stable.
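To make the parallel-operation point above concrete, here is a minimal conceptual sketch in standard C++ only. The functions runDetector() and runAttributeModel() are hypothetical stand-ins for SNNF model calls, not actual SNNF interfaces; the point is simply that two model stages run on separate threads, so NPU inference and CPU post-processing overlap and the end-to-end frame rate rises.

// Conceptual sketch only: runDetector() and runAttributeModel() are
// hypothetical placeholders for SNNF model calls, not real SNNF APIs.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { int id; };
struct Detection { int frameId; };

// Stand-in for a detection model (NPU-bound in a real application).
static Detection runDetector(const Frame& f) { return Detection{f.id}; }
// Stand-in for a second-stage model, e.g. attribute recognition.
static void runAttributeModel(const Detection& d) {
    std::cout << "attributes for frame " << d.frameId << "\n";
}

int main() {
    std::queue<Detection> results;
    std::mutex mtx;
    std::condition_variable cv;
    bool done = false;
    std::vector<Frame> frames{{0}, {1}, {2}, {3}};

    // Stage 1: detection producer thread.
    std::thread detector([&] {
        for (const Frame& f : frames) {
            Detection d = runDetector(f);
            { std::lock_guard<std::mutex> lk(mtx); results.push(d); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(mtx); done = true; }
        cv.notify_one();
    });

    // Stage 2: attribute consumer thread, overlapping with stage 1.
    std::thread attributes([&] {
        for (;;) {
            std::unique_lock<std::mutex> lk(mtx);
            cv.wait(lk, [&] { return !results.empty() || done; });
            if (results.empty() && done) break;
            Detection d = results.front(); results.pop();
            lk.unlock();
            runAttributeModel(d);
        }
    });

    detector.join();
    attributes.join();
    return 0;
}

The sample applications shipped in the release wire up multi-model pipelines through the SNNF interfaces; the sketch only illustrates why overlapping the stages improves throughput.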
Resource
Please get the V1.0.0 release resource here.
Before starting any work, please carefully read the instruction files such as readme.md in the document directory.
Usage of V1.0.0
How to verify Official Demos
You can use the script we provide to start as follows:
/SNNF/release # ./snnf_run.sh
Usage: ./bin/snnf_nnsample [-m|-s|-a|-h] [-i|-v|option]
Version: 1.0.0_
Time:
[-m,--model <model>] run a single model
<model>:Age Det10g HumanAttr
LightFace OcrCls OcrDet
OcrRec VehicleAttr W600kR50
Yolov5sDetection Yolov5sV1 Yolov5sV2
Yolov8nClassify Yolov8sClassify Yolov5sV3
BotSortTrackStgcn Rtmdets YoloV10sDetection
YoloV8nCcpd YoloV8nDetectionBaseOpti YoloV8nDetectionOpti
YoloV8nObbOpti YoloV8nPoseOpti YoloV8nSegmentOpti
YoloV8sDetection YoloV8sDetectionBaseOpti YoloV8sDetectionOpti
YoloV8sObb YoloV8sPose YoloV8sSegment
example:./bin/snnf_nnsample -m Yolov5sDetection
./bin/snnf_nnsample --model HumanAttr
[-s,--sequential <model1,model2,...>] run sequential models
<models>:Yolov5sDetection,HumanAttr
LightFace,Age
OcrDet,OcrRec
OcrDet,OcrCls,OcrRec
YoloV8nCcpd,OcrRec
Det10g,W600kR50
YoloV8sDetection,VehicleAttr
YoloV8nDetectionOpti,BotSortTrack
YoloV8nPoseOpti,BotSortTrackStgcn
YoloV8nCcpd,OcrRec
YoloV8nCcpd,OcrCls,OcrRec
example:./bin/snnf_nnsample -s Yolov5s,HumanAttr
./bin/snnf_nnsample --sequential ocrDet,ocrCls,ocrRec
./bin/snnf_nnsample -s YoloV8nCcpd,OcrRec,imageWriter
./bin/snnf_nnsample -s YoloV8nDetectionOpti,BotSortTrack,videoWriter -v resource/video/humanCount.mp4
./bin/snnf_nnsample -s YoloV8nPoseOpti,BotSortTrackStgcn,videoWriter -v resource/video/person-falling.mp4
[-i,--image file] set image file to nn detection.
<file>: file name
[-c | option]: test count, this parameter is only match with -i
example:./bin/snnf_nnsample -s Yolov5sDetection,HumanAttr -i filename -c testCount
./bin/snnf_nnsample -s Yolov5sDetection,HumanAttr --image filename -c testCount
[-v,--video file] set video file to nn detection.
<file>: file name
example:./bin/snnf_nnsample -s Yolov5sDetection,HumanAttr -v filename
./bin/snnf_nnsample -s Yolov5sDetection,HumanAttr --video filename
[-a,--all] run all model testing
assist tools: imageWriter videoWriter BotSortTrack
Release folder structure
bin: snnf_nnsample. Prebuilt sample program that can run on the C3V Linux platform.
include: header files of the NN framework SDK.
lib: libraries of the NN framework SDK.
resource
config: config files for features.
font: TTF file used for plotting in the sample.
image: image files used for testing.
model: models used by the sample program.
video: video files used for testing.
samples: example code for using the NN framework.
snnf_run.sh: executable script for running the sample code.
thirdparty: third-party libraries, just as the name implies.
How to run the NN framework sample
Copy the release folder to C3V Linux.
/SNNF/release # ls -alh
total 36
drwxr-xr-x 8 xxx B400 4096 Sep 30 14:23 ./
drwxr-xr-x 15 xxx B400 4096 Sep 30 14:15 ../
drwxr-xr-x 2 xxx B400 4096 Sep 30 14:15 bin/
drwxr-xr-x 6 xxx B400 4096 Sep 30 14:15 include/
drwxr-xr-x 3 xxx B400 4096 Sep 30 14:15 lib/
drwxr-xr-x 7 xxx B400 4096 Sep 30 14:15 resource/
drwxr-xr-x 4 xxx B400 4096 Sep 30 14:15 samples/
-rwxr-xr-x 1 xxx B400 262 Sep 30 14:15 snnf_run.sh*
drwxr-xr-x 7 xxx B400 4096 Sep 30 14:15 thirdparty/
Run snnf_run.sh to run the SNNF sample.
a. One-time input
./snnf_run.sh -m YoloV8sDetection
#./snnf_run.sh -m YoloV8sDetection
1727625905242|7fad06c020|T|common: [app]YoloV8sDetection in
1727625905261|7fad06c020|I|common: [nn]create model from pluginName: YoloV8sDetection takes: 17
1727625905722|7f96fdf0e0|I|common: [nn]picked: 5
1727625905722|7f96fdf0e0|T|common: [app]GeneralModelOutputListener detect from resource/image/objectDetect.jpg, the result: (box: 35 196 158 403) --> label: 0(person), confidence: 0.89, fin: false
1727625905722|7f96fdf0e0|T|common: [app]GeneralModelOutputListener detect from resource/image/objectDetect.jpg, the result: (box: 526 185 112 392) --> label: 0(person), confidence: 0.87, fin: false
1727625905722|7f96fdf0e0|T|common: [app]GeneralModelOutputListener detect from resource/image/objectDetect.jpg, the result: (box: 173 203 97 360) --> label: 0(person), confidence: 0.86, fin: false
1727625905722|7f96fdf0e0|T|common: [app]GeneralModelOutputListener detect from resource/image/objectDetect.jpg, the result: (box: 0 321 44 253) --> label: 0(person), confidence: 0.75, fin: false
1727625905722|7f96fdf0e0|T|common: [app]GeneralModelOutputListener detect from resource/image/objectDetect.jpg, the result: (box: 12 66 625 403) --> label: 5(bus), confidence: 0.74, fin: true
1727625905781|7fad06c020|T|common: [app]YoloV8sDetection out, retVal: 0
b. Read input from the image file
./snnf_run.sh -m YoloV8nDetectionOpti -i resource/image/person640x640.jpg
# ./snnf_run.sh -m YoloV8nDetectionOpti -i resource/image/person640x640.jpg
1727626580087|7fbd7de020|T|common: [app]YoloV8nDetectionOpti in
1727626582152|7fbd7de020|I|common: [nn]create model from pluginName: YoloV8nDetectionOpti takes: 2063
1727626582574|7fa569a0e0|T|common: [app]GeneralModelOutputListener detect from resource/image/person640x640.jpg, the result: (box: 0 19 614 619) --> label: 0(person), confidence: 0.87, fin: true
1727626582622|7fbd7de020|T|common: [app]YoloV8nDetectionOpti out, retVal: 0
c. Read inputs from the video file.
./snnf_run.sh -m YoloV8nDetectionOpti -v resource/video/humanCount.mp4
# ./snnf_run.sh -m YoloV8nDetectionOpti -v resource/video/humanCount.mp4
1727626644381|7faf81e020|T|common: [app]streaming in
1727626645952|7faf81e020|I|common: [nn]create model from pluginName: YoloV8nDetectionOpti takes: 1388
1727626646044|7f951d80e0|T|common: [app]streaming test: runner func in
1727626646304|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 671 129 307 873) --> label: 0(person), confidence: 0.90, fin: false
1727626646305|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 0 364 268 461) --> label: 7(truck), confidence: 0.38, fin: true
1727626646375|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 672 127 325 873) --> label: 0(person), confidence: 0.90, fin: false
1727626646375|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 0 364 267 461) --> label: 7(truck), confidence: 0.39, fin: true
1727626646440|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 671 125 327 878) --> label: 0(person), confidence: 0.90, fin: false
1727626646440|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 0 364 267 460) --> label: 7(truck), confidence: 0.41, fin: true
1727626646498|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 673 125 323 878) --> label: 0(person), confidence: 0.90, fin: false
......
1727626682667|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 977 159 330 894) --> label: 0(person), confidence: 0.91, fin: false
1727626682667|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 668 262 270 708) --> label: 0(person), confidence: 0.86, fin: false
1727626682667|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 1 363 267 457) --> label: 7(truck), confidence: 0.35, fin: false
1727626682667|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 465 26 1454 1003) --> label: 6(train), confidence: 0.32, fin: false
1727626682667|7f959e80e0|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 847 565 173 403) --> label: 0(person), confidence: 0.26, fin: true
1727626686753|7faf81e020|T|common: [app]q to quit
d. Sequential models
./snnf_run.sh -s Yolov5sDetection,HumanAttr
# ./snnf_run.sh -s Yolov5sDetection,HumanAttr
1727627713332|7fa3285020|T|common: [app]sequential in
1727627713501|7fa3285020|T|common: [app]input image name: resource/image/person.jpg
1727627713833|7f937c50e0|T|common: [nn]detectedInfos: 1
1727627713833|7f937c50e0|T|common: [app]human attr(box: 606 141 274 655) --> result:
Male
Age18-60
Direct: Front
Glasses: True
Hat: False
HoldObjectsInFront: False
Bag: No bag
Upper: ShortSleeve UpperStride
Lower: Trousers
Shose: No boots
1727627713833|7f937c50e0|T|common: [nn]detectedInfos: 1
1727627713833|7f937c50e0|T|common: [app]human attr(box: 308 188 207 591) --> result:
Female
Age18-60
Direct: Back
Glasses: False
Hat: False
HoldObjectsInFront: False
Bag: ShoulderBag
Upper: ShortSleeve UpperLogo
Lower: LowerPattern Shorts
Shose: No boots
1727627718787|7fa3285020|T|common: [app]sequential out, retVal: -0x0
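Conceptually, the sequential chain above passes each person box produced by Yolov5sDetection to HumanAttr. The minimal sketch below illustrates that flow with hypothetical placeholder functions (detectPersons, classifyAttributes); they are not actual SNNF API and only mirror the structure of the log above.

// Illustration of the sequential chaining idea; detectPersons() and
// classifyAttributes() are hypothetical placeholders, not SNNF API.
#include <iostream>
#include <string>
#include <vector>

struct Box { int x, y, w, h; };

// Stand-in for the detection stage: returns person boxes for an image.
static std::vector<Box> detectPersons(const std::string& image) {
    (void)image;                 // boxes copied from the log above
    return {{606, 141, 274, 655}, {308, 188, 207, 591}};
}

// Stand-in for the attribute stage applied to one detected box.
static std::string classifyAttributes(const std::string& image, const Box& b) {
    (void)image;
    return "attributes for box(" + std::to_string(b.x) + ", " +
           std::to_string(b.y) + ", " + std::to_string(b.w) + ", " +
           std::to_string(b.h) + ")";
}

int main() {
    const std::string image = "resource/image/person.jpg";
    // Sequential models: the second model consumes the first model's output.
    for (const Box& b : detectPersons(image)) {
        std::cout << classifyAttributes(image, b) << "\n";
    }
    return 0;
}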
e. Save model inference results to an image.
./snnf_run.sh -s YoloV8sPose,imageWriter
# ./snnf_run.sh -s YoloV8sPose,imageWriter
1727627887700|7fafe27020|T|common: [app]sequential in
1727627887700|7fafe27020|T|common: [app]warning: sequential model list(not tested)
1727627887725|7fafe27020|I|common: [nn]create model from pluginName: YoloV8sPose takes: 24
1727627887858|7fafe27020|T|common: [app]input image name: resource/image/pose_input.jpg
1727627888707|7f9db2a0e0|I|common: [nn]picked: 5
1727627888707|7f9db2a0e0|I|common: [nn]plot: 0 91%, [(852, 142) - (1169, 753)], person
1727627888710|7f9db2a0e0|I|common: [nn]plot: 0 89%, [(1689, 187) - (1835, 642)], person
1727627888711|7f9db2a0e0|I|common: [nn]plot: 0 89%, [(61, 123) - (232, 601)], person
1727627888711|7f9db2a0e0|I|common: [nn]plot: 0 88%, [(1337, 330) - (1441, 679)], person
1727627888712|7f9db2a0e0|I|common: [nn]plot: 0 87%, [(369, 252) - (480, 671)], person
1727627888839|7f9db2a0e0|T|common: [app]write an image: detected_1883_0931_1727627888712.jpg
1727627893526|7fafe27020|T|common: [app]sequential out, retVal: -0x0
Results are saved to the image detected_1883_0931_1727627888712.jpg.
How to build SNNF
Cross-compile for the C3V environment.
a. Use snnf_build.sh to compile SNNF.
b. All resources will be installed into the release folder.
Copy the release folder to the C3V platform.
Set up the environment variables.
a. Set the environment variables manually:
export LD_LIBRARY_PATH=${PWD}/lib:${PWD}/thirdparty/opencv4/lib:${PWD}/thirdparty/pytorch/lib:${PWD}/thirdparty/freetype/lib:${PWD}/thirdparty/libpng/lib:${LD_LIBRARY_PATH}
b. Alternatively, running snnf_run.sh will set the environment variables automatically.
Then you can run snnf_run.sh to launch the SNNF sample.
Models reference
Model Name | Version or Path |
Yolov5s | https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt |
Human Attributes | https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip |
Light Face | |
Optical character recognition | |
Age recognition | GitCode (commit 7c024d9d453c9b35a72a984d8821b5832ef17401) |
Yolov8 Detection | |
Yolov8 Pose | |
Yolov8 OBB | |
Yolov8 Segmentation | |
Yolov8 Classification | |
Vehicle attributes | https://bj.bcebos.com/v1/paddledet/models/pipeline/vehicle_attribute_model.zip |
License plate recognition | CCPD2020, Yolov8 CCPD detection, OCR |
Yolov10 Detection | |
RTMDet | |
Face Recognition | |
Object tracking | BotSort |
Falling Recognition | STGCN |