
This is the formal release V1.2.0 of SNNF (Sunplus Neural Network Framework).

Target of V1.2.0

On the basis of v1.1.0, v1.2.0 mainly includes the following updates:

  1. The samples provided in the SDK release now include simple examples, such as running a single model end to end: creation, detection, and result processing.

  2. Renamed sample->demo and unitest->sample.

  3. Two compilation methods, CMake and Makefile, have been added for the new samples:

    • Makefile supports compiling sample code using the snnf_build.sh script.

    • CMake supports compiling samples using the snnf_build_samples.sh script in the release directory.

    • Makefile supports compiling individual samples separately in the release directory.

  4. Organized the rotate function of OcrDet, providing an interface for the application layer to enable this function and to set horizontal or vertical mode.

  5. Since models can now be selected from the model zoo, the model list shown by snnf_demo corresponds to the current selection.

  6. Whether compiling the SNNF open-source code or the SNNF release samples, users can easily switch the project config between official settings and customer settings (see the sketch after this list).

    • makefile_config_user.mk is supported: if this file exists in the project directory, its configuration takes precedence; if it does not, the default configuration in makefile_config.mk is used.

    • In the two config files mentioned above, configuring which models are plugin or buildin effectively controls the release code size and bin size.

  7. A configure option has been added for both buildin and plugin models.

  8. Adjusted the ./snnf_demo.sh -a command: each test case now has an independent delimiter and a specific test name, every listed item can actually be tested, and every example shown by ./snnf_demo.sh is genuine and usable.

  9. Generated the 《Models' Guide》 as an introduction to the SNNF samples.
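As a rough sketch of the config switch described in item 6 (the variable names and values below are hypothetical illustrations, not taken from the actual files), a customer override might look like this:

# makefile_config_user.mk -- hypothetical customer override (illustrative names).
# If this file exists in the project directory, it takes precedence over
# makefile_config.mk; trimming the model selection here reduces code/bin size.
CONFIG_MODEL_YOLOV8S_DETECTION := plugin
CONFIG_MODEL_YOLOV5S_DETECTION := buildin
CONFIG_MODEL_OCR_DET           := n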

Resource

Please get the V1.2.0 release resource here.

Before starting any work, please carefully read the instruction files such as readme.md in the document directory.

Usage of V1.2.0

How to verify Official Demos

You can use the script we provide to start as follows:

/SNNF/release # ./snnf_demo.sh
Usage: ./bin/snnf_demo [-m|-s|-a|-h] [-i|-v|-o|option]
        Version: 1.2.0_V1.2.0
        Time: 2025-01-24 14:35:55 +0800
        [-m,--model <model>] run a single model
                <model>:Age                        Det10g                     HumanAttr
                        LightFace                  OcrCls                     OcrDet
                        OcrRec                     VehicleAttr                W600kR50
                        YoloV8sOdMap               Yolov5sDetection           Yolov5sV1
                        Yolov8nClassify            Yolov8sClassify            stgcn
                        Yolov5sV2                  BotSortTrack               BotSortTrackStgcn
                        GenderAge                  Rtmdets                    YoloV10sDetection
                        YoloV8nCcpdOpti            YoloV8nDetectionBaseOpti   YoloV8nDetectionOpti
                        YoloV8nObbOpti             YoloV8nPoseOpti            YoloV8nSegmentOpti
                        YoloV8sDetection           YoloV8sDetectionBaseOpti   YoloV8sDetectionOpti
                        YoloV8sObb                 YoloV8sPose                YoloV8sSegment

                example:./bin/snnf_demo -m Yolov5sDetection
                        ./bin/snnf_demo --model HumanAttr

        [-s,--sequential <model1,model2,...>] run sequential models
                <models>:Yolov5sDetection,HumanFilter,HumanAttr
                        LightFace,Age
                        OcrDet,OcrRec
                        OcrDet,OcrCls,OcrRec
                        Det10g,W600kR50
                        YoloV8sDetection,VehicleFilter,VehicleAttr
                        YoloV8nDetectionOpti,BotSortTrack
                        YoloV8nPoseOpti,BotSortTrackStgcn
                        YoloV8nCcpdOpti,OcrRec
                        YoloV8nCcpdOpti,OcrCls,OcrRec
                example:./bin/snnf_demo -s Yolov5sDetection,HumanFilter,HumanAttr
                        ./bin/snnf_demo --sequential OcrDet,OcrCls,OcrRec
                        ./bin/snnf_demo -s YoloV8nCcpdOpti,OcrRec,imageWriter
                        ./bin/snnf_demo -s YoloV8sDetectionOpti,BotSortTrack,videoWriter -v resource/video/humanTracking.mp4
                        ./bin/snnf_demo -s YoloV8nPoseOpti,BotSortTrackStgcn,videoWriter -v resource/video/person-falling.mp4

        [-i,--image file] set image file to nn detection.
                <file>: file name
                [-c | option]: test count, this parameter is only match with -i
                example:./bin/snnf_demo -s Yolov5sDetection,HumanFilter,HumanAttr -i resource/image/person.jpg -c 2
                        ./bin/snnf_demo -s Yolov5sDetection,HumanFilter,HumanAttr --image resource/image/person.jpg -c 2

        [-v,--video file] set video file to nn detection.
                <file>: file name
                example:./bin/snnf_demo -s YoloV8sDetectionOpti,BotSortTrack,videoWriter -v resource/video/humanTracking.mp4
                        ./bin/snnf_demo -s YoloV8sDetectionOpti,BotSortTrack,videoWriter --video resource/video/humanTracking.mp4

        [-o,--output file] specify the output file name for saving results.
                <file>: file name with extension (e.g., output.jpg, output.json, output.mp4)
                This parameter must be used in conjunction with imageWriter, jsonWriter, or videoWriter.
                example:./bin/snnf_demo -s Yolov5sDetection,HumanFilter,HumanAttr,imageWriter -i resource/image/person.jpg -o output.jpg
                        ./bin/snnf_demo -s Yolov5sDetection,HumanFilter,HumanAttr,jsonWriter -i resource/image/person.jpg -o output.json
                        ./bin/snnf_demo -s YoloV8sDetectionOpti,BotSortTrack,videoWriter -v resource/video/humanTracking.mp4 -o output.mp4

        [-a,--all] run all model testing

        assist tools: imageWriter videoWriter jsonWriter BotSortTrack HumanFilter VehicleFilter

Release folder structure

  • bin: some prebuilt applications.

    • snnf_demo: a prebuilt demo program that runs on the C3V Linux platform; it is just a demo showcasing the functionality of SNNF.

    • snnf_sequential_model: a prebuilt sample for the sequential flow. Customers can follow the sample code here to build similar features of their own.

    • snnf_single_model: a prebuilt sample for a single model. Customers can follow the sample code here to build similar features of their own.

    • snnf_tracking: a prebuilt sample for human or vehicle tracking. Customers can follow the sample code here to build similar features of their own.

    • snnf_yolov8s_map: a prebuilt sample for mAP testing. Customers can follow the sample code here to build similar features of their own.

  • cmake: the CMake config, both for building on C3V and for cross-compiling.

  • demo: the source code of snnf_demo.

  • include: header files of the Sunplus NN framework SDK.

  • lib: libraries of the Sunplus NN framework SDK.

  • model_config.mk: model selection for both demo and samples.

  • resource

    • config: some config files for features.

    • font: TTF file for the plotting sample.

    • image: image files used for testing.

    • model: models used by the sample programs.

    • video: video files used for testing.

  • samples: sample code for using the Sunplus NN framework.

  • snnf_build_demo.sh: executable script for building the demo code.

  • snnf_build_samples.sh: executable script for building the sample code (see the example after this list).

  • snnf_demo.sh: executable script for running the demo.

  • snnf_env.sh: executable script for setting up the compilation environment.

  • thirdparty: third-party libraries, just as the name implies.
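For example, rebuilding the demo and samples directly in the release directory (a minimal sketch using the scripts listed above):

cd /SNNF/release
./snnf_build_samples.sh    # build the sample code under samples/
./snnf_build_demo.sh       # build the demo source under demo/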

How to run SNNF demo

  1. Copy the release folder to C3V Linux.

/SNNF/release # ls -alh
total 480K
drwxr-xr-x 10 root root  32K Jan 23  2025 .
drwxr-xr-x  3 root root  32K Jan 23  2025 ..
drwxr-xr-x  2 root root  32K Jan 23  2025 bin
drwxr-xr-x  2 root root  32K Jan 23  2025 cmake
drwxr-xr-x  5 root root  32K Jan 23  2025 demo
drwxr-xr-x  6 root root  32K Jan 23  2025 include
drwxr-xr-x  3 root root  32K Jan 23  2025 lib
-rwxr-xr-x  1 root root  839 Jan 23  2025 model_config.mk
drwxr-xr-x  7 root root  32K Jan 23  2025 resource
drwxr-xr-x  6 root root  32K Jan 23  2025 samples
-rwxr-xr-x  1 root root  907 Jan 23  2025 snnf_build_demo.sh
-rwxr-xr-x  1 root root 1.1K Jan 23  2025 snnf_build_samples.sh
-rwxr-xr-x  1 root root  493 Jan 23  2025 snnf_demo.sh
-rwxr-xr-x  1 root root 1.7K Jan 23  2025 snnf_env.sh
drwxr-xr-x  9 root root  32K Jan 23  2025 thirdparty
  2. Run snnf_demo.sh to run the SNNF demo.

a. One-time run with the default input

./snnf_demo.sh -m YoloV8sDetection

# ./snnf_demo.sh -m YoloV8sDetection
1737541183779|7fa7539040|T|common: [app]YoloV8sDetection in
1737541185047|7f9504a080|T|common: [app]GeneralModelOutputListener detect from resource/image/vehicle.jpg, the result: (box: 415.31 288.56 1218.38 520.88) --> label: 2(car), confidence: 0.96, fin: true
1737541185097|7fa7539040|T|common: [app]YoloV8sDetection out, retVal: -0x0

b. Read input from an image file

./snnf_demo.sh -m YoloV8nDetectionOpti -i resource/image/person640x640.jpg

# ./snnf_demo.sh -m YoloV8nDetectionOpti -i resource/image/person640x640.jpg
1737541451015|7f98e21040|T|common: [app]YoloV8nDetectionOpti in
1737541455626|7f80b5a080|T|common: [app]GeneralModelOutputListener detect from resource/image/person640x640.jpg, the result: (box: 0.00 19.00 614.75 619.44) --> label: 0(person), confidence: 0.87, fin: true
1737541455668|7f98e21040|T|common: [app]YoloV8nDetectionOpti out, retVal: -0x0

c. Read inputs from a video file

./snnf_demo.sh -m YoloV8nDetectionOpti -v resource/video/humanCount.mp4

# ./snnf_demo.sh -m YoloV8nDetectionOpti -v resource/video/humanCount.mp4
1737541527804|7f97f66040|T|common: [app]streaming in
1737541529610|7f7d798080|T|common: [app]streaming test: runner func in
1737541529821|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 672.19 131.25 303.19 873.38) --> label: 0(person), confidence: 0.90, fin: false
1737541529821|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 0.56 364.31 268.12 460.69) --> label: 7(truck), confidence: 0.39, fin: false
1737541529821|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 548.06 10.31 1369.88 1009.69) --> label: 6(train), confidence: 0.25, fin: true
1737541529888|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 673.88 127.88 323.06 875.44) --> label: 0(person), confidence: 0.90, fin: false
1737541529888|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 0.56 364.31 268.31 460.69) --> label: 7(truck), confidence: 0.40, fin: false
1737541529888|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 543.00 13.31 1375.31 1009.50) --> label: 6(train), confidence: 0.26, fin: true
1737541529957|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 675.56 126.00 318.00 877.12) --> label: 0(person), confidence: 0.90, fin: false
1737541529957|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 0.56 364.31 268.31 460.31) --> label: 7(truck), confidence: 0.44, fin: false
1737541529957|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 543.00 12.19 1375.12 1006.31) --> label: 6(train), confidence: 0.28, fin: true
......
1737541565951|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 851.91 564.42 160.64 405.05) --> label: 0(person), confidence: 0.30, fin: true
1737541566010|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 977.81 159.75 329.81 895.69) --> label: 0(person), confidence: 0.90, fin: false
1737541566010|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 666.94 263.81 275.06 708.00) --> label: 0(person), confidence: 0.87, fin: false
1737541566010|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 1583.39 458.81 64.27 130.03) --> label: 0(person), confidence: 0.33, fin: false
1737541566011|7f7dfa8080|T|common: [app]GeneralModelOutputListener detect from , the result: (box: 466.12 10.88 1453.88 1017.75) --> label: 6(train), confidence: 0.32, fin: true

1737541664021|7f97f66040|T|common: [app]q to quit
q
1737541665502|7f97f66040|T|common: [app]The input file: resource/video/humanCount.mp4 has 516 frames
1737541665502|7f97f66040|T|common: [app]streaming out, retVal: -0x0

d. Sequential models

./snnf_demo.sh -s Yolov5sDetection,HumanAttr

# ./snnf_demo.sh -s Yolov5sDetection,HumanAttr
1737541758322|7fa7beb040|T|common: [app]sequential in
1737541759189|7fa7beb040|T|common: [app]input image name: resource/image/person.jpg
1737541759278|7f96419080|T|common: [app]human attr(box: 612.44 156.84 268.88 625.51) --> result:
age: 18-60
bag: No bag
direction: Front
gender: Male
glasses: True
hat: False
holdObjectsInFront: False
lower: Trousers
shose: No boots
upper: ShortSleeve UpperStride
1737541759282|7f96419080|T|common: [app]human attr(box: 311.82 181.12 199.79 606.84) --> result:
age: 18-60
bag: ShoulderBag
direction: Back
gender: Female
glasses: False
hat: False
holdObjectsInFront: False
lower: LowerPattern Shorts
shose: No boots
upper: ShortSleeve
1737541764286|7fa7beb040|T|common: [app]sequential out, retVal: -0x0

e. Save model inference results to an image

./snnf_demo.sh -s YoloV8sPose,imageWriter

# ./snnf_demo.sh -s YoloV8sPose,imageWriter
1737541805564|7fa012c040|T|common: [app]sequential in
1737541806640|7fa012c040|T|common: [app]input image name: resource/image/pose_input.jpg
1737541806902|7f8dcba080|T|common: [app]write an image: detected_1883_0931_1737541806800.jpg
1737541811685|7fa012c040|T|common: [app]sequential out, retVal: -0x0

Results will be saved to the image detected_1883_0931_1737541806800.jpg.

f. Save model inference results to a JSON file

./snnf_demo.sh -s YoloV8sPose,jsonWriter -o yolov8PoseResults.json

# ./snnf_demo.sh -s YoloV8sPose,jsonWriter -o yolov8PoseResults.json
1737541840765|7fa1527040|T|common: [app]sequential in
1737541840967|7fa1527040|T|common: [app]input image name: resource/image/pose_input.jpg
1737541845994|7fa1527040|T|common: [app]sequential out, retVal: -0x0

Results will be saved to yolov8PoseResults.json. If the -o option is not used, results are saved to the default file default_result.json.
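For example, running the same pipeline without -o falls back to the default file name:

./snnf_demo.sh -s YoloV8sPose,jsonWriter
# results are written to default_result.json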

How to build SNNF

  1. Cross-compile for the C3V environment.

a. Please use snnf_build.sh to compile SNNF.

b. All the resources will be installed into the release folder.

  2. Copy the release folder to the C3V platform.

  3. Set up the environment variables.

a. Set the environment variables independently:

#!/bin/sh
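# SNNF core libraries and bundled third-party runtime dependencies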

export LD_LIBRARY_PATH=${PWD}/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/libpng/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/pytorch/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/freetype/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/opencv4/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/thirdparty/ffmpeg/lib:${LD_LIBRARY_PATH}

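# per-model plugin libraries (one directory per plugin model)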
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8sDetection:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/BotSortTrack:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/GenderAge:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/Rtmdets:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/Stgcn:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV10sDetection:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV5sV2:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8nCcpdOpti:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8nDetectionOpti:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8nObbOpti:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8nPoseOpti:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8nSegmentOpti:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8sDetection:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8sDetectionOpti:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8sObb:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8sPose:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=${PWD}/lib/plugin/YoloV8sSegment:${LD_LIBRARY_PATH}

b. Running snnf_demo.sh will set the environment variables automatically.

  4. Then you can run snnf_demo.sh for the SNNF demo.
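Putting these steps together (a minimal sketch; the board address and target path are placeholders, and scp is just one way to copy the folder):

# On the build host: cross-compile SNNF; artifacts are installed into release/
./snnf_build.sh

# Copy the release folder to the C3V board (placeholder host and path)
scp -r release root@<c3v-board>:/SNNF/

# On the board: snnf_demo.sh sets the environment variables automatically
cd /SNNF/release
./snnf_demo.sh -m YoloV8sDetection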

SNNF Sample introduction

Please refer to 《Models' Guide》.

User API

Please refer to API DOC v2.0.
