V2.0.0 of SNNF

This is the formal release V2.0.0 of SNNF (Sunplus Neural Network Framework).

Targets of V2.0.0

This version is based on SNNF 1.2.0. It is the first release to separate the Model-Zoo from the framework, and it adds some typical sample Python interfaces:

  1. Split the current SNNF and restructure the entire system: the bottom layer is the Model-Zoo (treated as a common resource), and the upper layers take the form of SNNF, mAP, etc. This makes the architecture more consistent with the principles of high cohesion and low coupling, and more flexible and user-friendly.

  2. Add Python interfaces for the SNNF user API.

    1. Demo samples for single model, sequential flow, and mAP test are ready now.

    2. Please get the xxx_usage.txt and xxx_test.py files for your test.

  3. The remaining SNNF and the separated Model-Zoo will be recombined to achieve functionality equivalent to SNNF 1.2.0. This goal will be reached through the combination of SNNF 2.0.1 and Model-Zoo 1.0.1.

SNNF V2.0.0 can be run with Model-Zoo V1.0.0 now.

Resource

Please get the V2.0.0 release resource here, tag: V2.0.0.

Usage of V2.0.0

How to verify Official Demos

You can use the script we provide to start as follows:

/SNNF/release # ./snnf_demo.sh
Usage: ./bin/snnf_demo [-m|-s|-a|-h] [-i|-v|-o|-p|option]
Version: 2.0.0_commit a6021c8e89b467f69f0a10e6cdac589fb8cafabb
Time: Date: Tue Apr 1 13:32:34 2025 +0800
[-m,--model <model>]  run a single model
    <model>: AgeU8 GenderAgeU8 HumanAttrHybridU8 HumanAttrI16 OcrClsI16 OcrDetI16 OcrRecI16
             VehicleAttrI16 Yolo11sClassifyHybridI8 Yolo11sDetectionHybridI8 Yolo11sObbHybridI8
             Yolo11sPoseHybridI8 Yolo11sSegmentationHybridI8 Yolov10sDetectionHybridI8
             Yolov8nCcpdDetectionOptiU8 Yolov8sClassifyHybridI8 Yolov8sDetectionHybridI8
             Yolov8sDetectionI16 Yolov8sPoseI16
    example: ./bin/snnf_demo -m Yolov8sDetectionI16
             ./bin/snnf_demo --model HumanAttrI16
[-s,--sequential <model1,model2,...>]  run sequential models
    <models>: Yolo11sDetectionHybridI8,HumanFilter,HumanAttrI16
              OcrDetI16,OcrRecI16
              OcrDetI16,OcrClsI16,OcrRecI16
              Yolov8sDetectionHybridI8,VehicleFilter,VehicleAttrI16
              Yolov10sDetectionHybridI8,BotSortTrack
    example: ./bin/snnf_demo -s Yolov5sDetection,HumanFilter,HumanAttr
             ./bin/snnf_demo --sequential OcrDetI16,OcrClsI16,OcrRecI16
             ./bin/snnf_demo -s YoloV8sDetectionOpti,BotSortTrack,videoWriter -v resource/video/humanTracking.mp4
[-i,--image file]  set image file to nn detection.
    <file>: file name
    [-c | option]: test count; this parameter only works together with -i
    example: ./bin/snnf_demo -s Yolo11sDetectionHybridI8,HumanFilter,HumanAttrI16 -i resource/image/person.jpg -c 2
             ./bin/snnf_demo -s Yolo11sDetectionHybridI8,HumanFilter,HumanAttrI16 --image resource/image/person.jpg -c 2
[-v,--video file]  set video file to nn detection.
    <file>: file name
    example: ./bin/snnf_demo -s Yolo11sDetectionHybridI8,BotSortTrack -v resource/video/humanTracking.mp4
             ./bin/snnf_demo -s Yolo11sDetectionHybridI8,BotSortTrack --video resource/video/humanTracking.mp4
[-o,--output file]  specify the output file name for saving results.
    <file>: file name with extension (e.g., output.jpg, output.json, output.mp4)
    example: ./bin/snnf_demo -s Yolo11sDetectionHybridI8,HumanFilter,HumanAttrI16 -i resource/image/person.jpg -o output.jpg
             ./bin/snnf_demo -s Yolo11sDetectionHybridI8,HumanFilter,HumanAttrI16 -i resource/image/person.jpg -o output.json
             ./bin/snnf_demo -s Yolo11sDetectionHybridI8,BotSortTrack -v resource/video/humanTracking.mp4 -o output.mp4
[-a,--all]  run all model testing
[-p,--performance]  enable performance test
assist tools: BotSortTrack BotSortTrackStgcn HumanFilter VehicleFilter

Release folder structure

[screenshot: image-20250402-081015.png]
  • bin: some prebuilt applications.

    • snnf_demo: a prebuilt demo program that runs on the C3V Linux platform; it is just a demo showcasing the functionality of SNNF.

    • snnf_yolov8s_map: a prebuilt sample for mAP testing. Customers can follow the sample code here to implement similar features of their own.

  • cmake: the CMake configs for building on C3V and for cross-compiling.

  • demo: the source code of snnf_demo, including the C++ and Python modes.

  • include: header files of the Sunplus NN framework SDK.

  • lib: libraries of the Sunplus NN framework SDK.

  • model_config.mk: model selection for both the demo and the samples.

  • python_res: the Python-interface libraries and resources.

  • resource

    • config: some config files for features.

    • font: TTF font file for the plotting sample.

    • image: image files used for tests.

    • models: models used by the sample programs; they refer to the Model-Zoo.

    • video: video files used for test.

  • samples: sample code for using the Sunplus NN framework; only yolov8s_map is verified and available now.

  • snnf_build_demo.sh: executable script for building the demo code.

  • snnf_build_samples.sh: executable script for building the sample code; only yolov8s_map will be built for now.

  • snnf_demo.sh: executable script for running the demo code.

  • snnf_env.sh: executable script for setting up the compiling environment.

  • thirdparty: third-party libraries, just as the name implies.
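After unpacking, a quick shell check can confirm that the release folder matches the layout above. This is a minimal sketch; the entry names are taken from the list in this section:

```shell
# Verify the top-level entries of the SNNF release folder are present.
# Run from inside the release folder; no output means all entries exist.
for entry in bin cmake demo include lib model_config.mk python_res \
             resource samples thirdparty; do
    [ -e "$entry" ] || echo "missing: $entry"
done
```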

How to run SNNF demo

  1. Copy the release folder to C3V Linux.

/SNNF/release # ls -alh
total 512K
drwxr-xr-x 11 root root  32K Apr  2  2025 .
drwxr-xr-x  3 root root  32K Apr  1 02:26 ..
drwxr-xr-x  2 root root  32K Apr  2  2025 bin
drwxr-xr-x  2 root root  32K Apr  2  2025 cmake
drwxr-xr-x  4 root root  32K Apr  2  2025 demo
drwxr-xr-x  7 root root  32K Apr  2  2025 include
drwxr-xr-x  2 root root  32K Apr  2  2025 lib
-rwxr-xr-x  1 root root  708 Apr  2  2025 model_config.mk
drwxr-xr-x  4 root root  32K Apr  2  2025 python_res
drwxr-xr-x  7 root root  32K Apr  2  2025 resource
drwxr-xr-x  6 root root  32K Apr  2  2025 samples
-rwxr-xr-x  1 root root  907 Apr  2  2025 snnf_build_demo.sh
-rwxr-xr-x  1 root root 1.1K Apr  2  2025 snnf_build_samples.sh
-rwxr-xr-x  1 root root  493 Apr  2  2025 snnf_demo.sh
-rwxr-xr-x  1 root root 2.0K Apr  2  2025 snnf_env.sh
drwxr-xr-x  9 root root  32K Apr  2  2025 thirdparty
  2. Run snnf_demo.sh to run the SNNF demo.

a. One-time input

./snnf_demo.sh -m Yolo11sDetectionHybridI8

# ./snnf_demo.sh -m Yolo11sDetectionHybridI8
1743500227803|7f8fb99040|T|common: [app]Yolo11sDetectionHybridI8 in
1743500228525|7f78f35740|T|common: [nn]GeneralModelOutputListener detect from resource/image/person.jpg, the result: (box: 612.90 149.38 274.48 645.11) --> label: 0(person), confidence: 0.95, fin: false
1743500228525|7f78f35740|T|common: [nn]GeneralModelOutputListener detect from resource/image/person.jpg, the result: (box: 314.62 175.28 210.53 620.37) --> label: 0(person), confidence: 0.93, fin: false
1743500228525|7f78f35740|T|common: [nn]GeneralModelOutputListener detect from resource/image/person.jpg, the result: (box: 448.94 286.61 66.52 177.38) --> label: 26(handbag), confidence: 0.73, fin: true
1743500228541|7f8fb99040|T|common: [app]Yolo11sDetectionHybridI8 out, retVal: -0x0

b. Read input from an image file

./snnf_demo.sh -m Yolo11sDetectionHybridI8 -i resource/image/person640x640.jpg

# ./snnf_demo.sh -m Yolo11sDetectionHybridI8 -i resource/image/person640x640.jpg
1743500300752|7f97c49040|T|common: [app]Yolo11sDetectionHybridI8 in
1743500301037|7f80fe5740|T|common: [nn]GeneralModelOutputListener detect from resource/image/person640x640.jpg, the result: (box: 0.00 14.00 626.00 622.00) --> label: 0(person), confidence: 0.87, fin: true
1743500301052|7f97c49040|T|common: [app]Yolo11sDetectionHybridI8 out, retVal: -0x0

c. Read inputs from a video file

./snnf_demo.sh -m Yolo11sDetectionHybridI8 -v resource/video/humanCount.mp4

# ./snnf_demo.sh -m Yolo11sDetectionHybridI8 -v resource/video/humanCount.mp4
1743501540427|7f96aa7040|T|common: [nn]Press: 'q' or 'Q' to quit the test
1743501540427|7f7d5f5740|T|common: [app]streaming test: runner func in
1743501540746|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 673.88 130.50 302.25 870.00) --> label: 0(person), confidence: 0.92, fin: false
1743501540746|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 47.81 364.88 221.62 464.25) --> label: 61(toilet), confidence: 0.36, fin: false
1743501540746|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 48.19 364.50 220.12 465.00) --> label: 72(refrigerator), confidence: 0.30, fin: true
1743501540843|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 674.25 128.25 310.50 871.50) --> label: 0(person), confidence: 0.92, fin: false
1743501540845|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 47.81 364.50 221.81 463.50) --> label: 61(toilet), confidence: 0.40, fin: false
1743501540846|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 49.12 364.12 219.38 462.75) --> label: 72(refrigerator), confidence: 0.31, fin: true
1743501540939|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 676.50 126.00 309.00 874.50) --> label: 0(person), confidence: 0.93, fin: false
1743501540939|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 48.75 364.50 220.69 466.50) --> label: 61(toilet), confidence: 0.43, fin: false
1743501540940|7f7de05740|T|common: [nn]GeneralModelOutputListener detect from , the result: (box: 48.75 364.50 220.69 466.50) --> label: 72(refrigerator), confidence: 0.31, fin: true
......
1737541664021|7f97f66040|T|common: [app]q to quit
q
1737541665502|7f97f66040|T|common: [app]The input file: resource/video/humanCount.mp4 has 516 frames
1737541665502|7f97f66040|T|common: [app]streaming out, retVal: -0x0

d. Sequential models

./snnf_demo.sh -s Yolo11sDetectionHybridI8,HumanFilter,HumanAttrI16

# ./snnf_demo.sh -s Yolo11sDetectionHybridI8,HumanFilter,HumanAttrI16
1743501764050|7f905e7040|T|common: [app]sequential in
1743501764320|7f905e7040|T|common: [app]input image name: resource/image/person.jpg
1743501764462|7f75f51740|T|common: [nn]human attr(box: 612.90 149.38 274.48 645.11) --> result: age: 18-60 bag: No bag direction: Front gender: Male glasses: True hat: False holdObjectsInFront: False lower: Trousers shose: No boots upper: ShortSleeve UpperStride
1743501764465|7f75f51740|T|common: [nn]human attr(box: 314.62 175.28 210.53 620.37) --> result: age: 18-60 bag: ShoulderBag direction: Back gender: Female glasses: False hat: False holdObjectsInFront: False lower: LowerPattern Shorts shose: No boots upper: ShortSleeve
1743501764472|7f905e7040|T|common: [app]sequential out, retVal: -0x0

e. Save model inference results to an image

./snnf_demo.sh -m Yolo11sPoseHybridI8 -o pose_detected_output.jpg

# ./snnf_demo.sh -m Yolo11sPoseHybridI8 -o pose_detected_output.jpg
1743501884763|7fad719040|T|common: [app]Yolo11sPoseHybridI8 in
1743501885651|7fad719040|T|common: [app]Yolo11sPoseHybridI8 out, retVal: -0x0
[screenshot: image-20250402-081419.png]
The results will be saved to the image pose_detected_output.jpg.

f. Save model inference results to a JSON file

./snnf_demo.sh -m Yolo11sPoseHybridI8 -o yolo11sPoseResults.json

# ./snnf_demo.sh -m Yolo11sPoseHybridI8 -o yolo11sPoseResults.json
1743502112289|7face59040|T|common: [app]Yolo11sPoseHybridI8 in
1743502112563|7face59040|T|common: [app]Yolo11sPoseHybridI8 out, retVal: -0x0
[screenshot: image-20250402-081601.png]
The results will be saved to yolo11sPoseResults.json. If the -o option is not used, no JSON file with the detected results will be generated.
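To inspect the saved results on the device, the JSON file can be pretty-printed with any JSON tool. A sketch, assuming python3 is available on the target:

```shell
# Pretty-print the detection results written by the -o option.
python3 -m json.tool yolo11sPoseResults.json
```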

How to build SNNF

  1. Cross-compile for C3V environment.

    • Please run snnf_config.sh first to configure SNNF according to the Model-Zoo's model information.

    • Please use snnf_build.sh to compile SNNF.

    • All the resources will be installed to the release folder.

  2. Copy the release folder to the C3V platform.

  3. Set up environment variables.

a. Set the environment variables manually with the command source snnf_env.sh.

#!/bin/sh
export LD_LIBRARY_PATH=${PWD}/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/libpng/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/pytorch/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/freetype/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/opencv4/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/ffmpeg/lib:${LD_LIBRARY_PATH}

b. Running snnf_demo.sh will set the environment variables automatically.
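If you set the variables manually, a quick check can confirm the release directories ended up on the loader path. The sketch below reproduces two of the exports from snnf_env.sh and then counts the release-folder entries on LD_LIBRARY_PATH (run from the release folder):

```shell
# Prepend two of the SNNF lib dirs the same way snnf_env.sh does,
# then count how many LD_LIBRARY_PATH entries point into this folder.
export LD_LIBRARY_PATH="${PWD}/lib:${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH="${PWD}/thirdparty/opencv4/lib:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -c "^${PWD}"
```

With a clean environment this prints 2, one per export above; sourcing the full snnf_env.sh adds the remaining thirdparty entries as well.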

  4. Then, you can run snnf_demo.sh for the SNNF demo.

  5. If you have modified the demo code in the release folder, you can use snnf_build_demo.sh to rebuild it.

  6. If you have modified the sample code in the release folder, you can use snnf_build_samples.sh to rebuild it.

SNNF Sample introduction

Please refer to the "Models' Guide".

User API

Please refer to API DOC v2.0.