
Target of the gamma release

In this version, we have completed the following work:

  1. Added new NN modules and samples:

    1. Yolov8 Pose: human pose detection.

    2. Yolov8 OBB (oriented bounding box): detects the orientation of objects.

    3. Yolov8 Segmentation: detected targets are identified with masks.

    4. Yolov8 Classification: each target in the image is classified with a confidence score.

    5. Vehicle attributes: identifies trucks, cars, and buses, along with their colors.

    6. License plate recognition: detects license plates and recognizes their contents.

    7. Yolov10 Detection: object detection models.

    8. RTMDet: real-time object detection models.

  2. Parallel operation in multi-model applications, which maximizes NPU and CPU utilization and ultimately the application's processing frame rate.

  3. Implemented official and customized partitions.

  4. Reorganized some folder structures, such as tool and algo.

  5. The parameters of nnf_nnsample follow the standard Linux option style (short - and long -- forms).

Gamma release resource

Please download the gamma release resources here.

Usage of the gamma release

How to verify the official demos

You can start with the provided script. Running it without arguments prints the usage information:

   /NNF/release # ./nnf_run.sh
   Usage: ./bin/nnf_nnsample [-m|-s|-a|-h] [-i|-v|-c|option]
   [-m,--model <model>] run a single model
            <model>:yolov5s lightFace age humanAttr ocrDet
                  ocrCls ocrRec yolov8n yolov8nBase yolov8s
                  yolov8sBase vehicle yolov8nCcpd yolov8nClassify yolov8nObb
                  yolov8nPose yolov8nSegment yolov8sDetection yolov8sPose yolov10sDetection
                  rtmdets imageWriter
            example:./bin/nnf_nnsample -m yolov5s
                  ./bin/nnf_nnsample --model humanAttr

   [-s,--sequential <model1,model2,...>] run sequential models
            <models>:yolov5s,humanAttr
                  lightFace,age
                  ocrDet,ocrRec
                  ocrDet,ocrCls,ocrRec
                  yolov8nCcpd,ocrRec
            example:./bin/nnf_nnsample -s yolov5s,humanAttr
                  ./bin/nnf_nnsample --sequential ocrDet,ocrCls,ocrRec

   [-i,--image file] set image file to nn detection.
            <file>: file name or input "." to using inner file.
            example:./bin/nnf_nnsample -s yolov5s,humanAttr -i filename -c testCount
                        ./bin/nnf_nnsample -s yolov5s,humanAttr --image . -c testCount

   [-v,--video file] set video file to nn detection.
            <file>: file name or input "." to using inner file.
            example:./bin/nnf_nnsample -s yolov5s,humanAttr -v filename
                  ./bin/nnf_nnsample -s yolov5s,humanAttr --video .

   [-a,--all] run all model testing
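For example, the short and long option forms shown above are interchangeable. The commands below pass documented options through nnf_run.sh, assuming it forwards its arguments to nnf_nnsample unchanged (as the examples later in this page suggest); the model name and image path are taken from the usage text and the release image folder:

./nnf_run.sh -m yolov8nPose -i ./image/person640x640.jpg
./nnf_run.sh --model yolov8nPose --image ./image/person640x640.jpg
./nnf_run.sh -a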

Release folder structure

  • bin: nnf_nnsample, a prebuilt sample program that runs on the C3V Linux platform.

  • image: images used for detection.

  • model: models used by the sample program.

  • include: header files of the NN framework SDK.

  • lib: libraries of the NN framework SDK.

  • sources: example code for using the NN framework.

  • video: video files included in the release.

  • nnf_run.sh: executable script for running the sample code.

How to run NN framework sample

  1. Copy the release folder to C3V Linux.
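For example, if the C3V board is reachable over the network, the folder can be copied with scp; the address, user name, and target path below are placeholders for your own setup:

scp -r ./release root@192.168.1.100:/NNF/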

/NNF/release # ls -alh
drwxr-xr-x    9 10989    11400       4.0K Jul  10  2024 .
drwxr-xr-x   10 10989    11400       4.0K Jul  10  2024 ..
drwxr-xr-x    2 10989    11400       4.0K Jul  10  2024 bin
drwxr-xr-x    2 10989    11400       4.0K Jul  10  2024 image
drwxr-xr-x    5 10989    11400       4.0K Jul  10  2024 include
drwxr-xr-x    5 10989    11400       4.0K Jul  10  2024 lib
drwxr-xr-x    2 10989    11400       4.0K Jul  10  2024 model
-rw-r-xr-x    1 10989    11400        398 Jul  10  2024 nnf_run.sh
drwxr-xr-x    5 10989    11400       4.0K Jul  10  2024 sources
drwxr-xr-x    2 10989    11400       4.0K Jul  10  2024 video
  2. Run nnf_run.sh to run the NNF sample.

a. One-time input

./nnf_run.sh -m yolov8n

/NNF/release # ./nnf_run.sh -m yolov8n
yolov8n in
general(box:    0   19  614  619) --> label: 0000, confidence: 0.87
yolov8n out, retVal: 0

b. Read input from the image file

./nnf_run.sh -m yolov8n -i ./image/person640x640.jpg

/NNF/release # ./nnf_run.sh -m yolov8n -i ./image/person640x640.jpg
yolov8n in
general(box:    0   19  614  619) --> label: 0000, confidence: 0.87
yolov8n out, retVal: 0

c. Read inputs from the video file.

./nnf_run.sh -m yolov8n -v ./video/humanCount.mp4

/NNF/release # ./nnf_run.sh -m yolov8n -v ./video/humanCount.mp4
streaming in
streaming test: runner func quit
general(box:  671  129  307  873) --> label: 0000, confidence: 0.90
general(box:    0  364  267  460) --> label: 0007, confidence: 0.38
general(box:  672  127  325  873) --> label: 0000, confidence: 0.90
general(box:    0  364  268  461) --> label: 0007, confidence: 0.39
general(box:  671  125  327  878) --> label: 0000, confidence: 0.90
general(box:    0  364  268  460) --> label: 0007, confidence: 0.41
general(box:  673  125  323  878) --> label: 0000, confidence: 0.90
general(box:    0  363  267  459) --> label: 0007, confidence: 0.44
general(box:  675  123  319  879) --> label: 0000, confidence: 0.90
general(box:    0  364  268  459) --> label: 0007, confidence: 0.38
general(box:  519   48 1398  983) --> label: 0006, confidence: 0.26
general(box:  677  123  317  879) --> label: 0000, confidence: 0.90
general(box:    0  364  268  459) --> label: 0007, confidence: 0.37
general(box:  678  123  320  882) --> label: 0000, confidence: 0.91
general(box:    0  364  268  459) --> label: 0007, confidence: 0.37
general(box:  678  125  321  882) --> label: 0000, confidence: 0.90
general(box:    0  364  267  457) --> label: 0007, confidence: 0.30
general(box:  677  128  324  877) --> label: 0000, confidence: 0.90
general(box:    0  363  267  455) --> label: 0007, confidence: 0.37
general(box:  677  128  324  878) --> label: 0000, confidence: 0.90
general(box:    0  363  267  456) --> label: 0007, confidence: 0.36
streaming out, retVal: 0

d. Sequential models

./nnf_run.sh -s yolov5s,humanAttr

/NNF/release # ./nnf_run.sh -s yolov5s,humanAttr
sequential in
human attr(box:  311  181  199  606) --> result: Male
Age18-60
Direct: Front
Glasses: True
Bag: No bag
Upper: ShortSleeve UpperStride
Lower: Trousers
human attr(box:  311  181  199  606) --> result: Female
Age18-60
Direct: Back
Glasses: False
Bag: ShoulderBag
Upper: ShortSleeve
Lower: LowerPattern Shorts
sequential out, retVal: 0
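The other sequential combinations listed in the usage text can be run the same way, for example the OCR pipeline:

./nnf_run.sh -s ocrDet,ocrCls,ocrRec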

e. Save model inference results to an image

./nnf_run.sh -s yolov8sPose,imageWriter

/NNF/release # ./nnf_run.sh -s yolov8sPose,imageWriter
sequential in
warning: sequential model list(not tested)
vnn_plot_detected_pose_results:  0  91%, [(852, 142) - (1169, 753)], person
vnn_plot_detected_pose_results:  0  89%, [(1689, 187) - (1835, 642)], person
vnn_plot_detected_pose_results:  0  89%, [(61, 123) - (232, 601)], person
vnn_plot_detected_pose_results:  0  88%, [(1337, 330) - (1441, 679)], person
vnn_plot_detected_pose_results:  0  87%, [(369, 252) - (480, 671)], person
write an image: detected_1883_931_154926538.jpg
sequential out, retVal: 0

The results will be saved to the image detected_1883_931_154926538.jpg.

How to build NN framework

  1. Cross-compile for the C3V environment.

a. Use nnf_build.sh to compile the NN framework.

b. All the resources will be installed to the release folder.

  2. Copy the release folder to the C3V platform.

  3. Set up the environment variables.

a. Set the environment variables manually:

export LD_LIBRARY_PATH=${PWD}/lib:${PWD}/lib/opencv:${PWD}/lib/pytorch:${LD_LIBRARY_PATH}

b. Alternatively, running nnf_run.sh will set the environment variables automatically.

  4. Then, you can run nnf_run.sh to launch the NNF sample.
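As a minimal sketch of what such a wrapper does (assuming nnf_run.sh simply exports the library paths shown above and forwards its arguments to the sample binary; the released script may differ), it is roughly equivalent to:

#!/bin/sh
# Sketch of a wrapper: export the SDK library paths, then forward all
# command-line options to the prebuilt sample binary.
export LD_LIBRARY_PATH=${PWD}/lib:${PWD}/lib/opencv:${PWD}/lib/pytorch:${LD_LIBRARY_PATH}
./bin/nnf_nnsample "$@"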

Models of the gamma release
