Detect Model
| Model | Pre-process | Inference | Post-process | NB Size |
| --- | --- | --- | --- | --- |
| yolov8nu8 | 3ms | 25ms | 19ms | 6.93MB |
| yolov8n16 | 3ms | 50ms | 29ms | 8.57MB |
| yolov8su8 | 3ms | 47ms | 20ms | 9.94MB |
| yolov8s16 | 3ms | 112ms | 34ms | 20.0MB |
| yolov8mu8 | 3ms | 75ms | 21ms | 19.4MB |
| yolov8m16 | 3ms | 232ms | 35ms | 41.3MB |
| yolov8lu8 | 3ms | 142ms | 25ms | 33.2MB |
| yolov8l16 | 3ms | 390ms | 35ms | 66.4MB |
yolov8nu8 denotes the yolov8n model converted to uint8 format with the VSI ACUITY toolkit.
yolov8n16 denotes the yolov8n model converted to int16 format with the VSI ACUITY toolkit.
All experimental data were measured in the C3V q654 environment.
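The per-stage times in the table can be summed into a serial per-frame latency and an upper-bound frame rate. A minimal sketch (model names and numbers taken from the detect table above; the "serial" label is our assumption that the three stages run back to back):

```python
# Per-stage times in ms (pre-process, inference, post-process),
# copied from the detect table above.
detect = {
    "yolov8nu8": (3, 25, 19),
    "yolov8n16": (3, 50, 29),
    "yolov8su8": (3, 47, 20),
    "yolov8s16": (3, 112, 34),
}

for name, (pre, inf, post) in detect.items():
    total = pre + inf + post  # serial per-frame latency in ms
    print(f"{name}: {total} ms/frame -> ~{1000 / total:.1f} fps (serial)")
```

For yolov8nu8 this serial sum is 47ms (~21 fps), while the local-file test below measures about 30.3ms per frame, which suggests the stages overlap at runtime rather than running strictly serially.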
NN test with local file
Read a 640x640 BGR file stored on the rootfs of the C3V q654.
Model: yolov8 nano uint8 detect model.
A single thread reads frames from the file and feeds them to the NN module.
Record the duration between time1, when the 1st frame is fed, and time2, when the 1000th NN post-processed result is output.
Average time = duration / 1000 = 30.293ms.
So the NN runtime can likely sustain about 30fps video with the yolov8 nano uint8 detect model.
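The measurement above can be sketched as a simple timing loop. This is not the actual C3V test program; `run_inference` is a stand-in that simulates a fixed per-frame cost so the sketch runs anywhere:

```python
import time

FRAMES = 1000

def run_inference(frame):
    # Stand-in for feeding a frame to the NN module and waiting for
    # its post-processed result; here we simulate a ~1ms cost.
    time.sleep(0.001)
    return frame

def measure_average_ms(frames):
    t1 = time.monotonic()          # time1: feeding the 1st frame
    for frame in frames:
        run_inference(frame)
    t2 = time.monotonic()          # time2: last post result emitted
    return (t2 - t1) * 1000.0 / len(frames)

avg_ms = measure_average_ms(range(FRAMES))
print(f"Average time = {avg_ms:.3f} ms -> ~{1000.0 / avg_ms:.0f} fps")
```

Averaging over 1000 frames, as the test does, smooths out per-frame jitter and start-up cost.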
Pose Model
| Model | Pre-process | Inference | Post-process | NB Size |
| --- | --- | --- | --- | --- |
| yolov8nu8-pose | 3ms | 26ms | 17ms | 7.02MB |
| yolov8n16-pose | 3ms | 52ms | 19ms | 9.00MB |
| yolov8su8-pose | 3ms | 47ms | 15ms | 10.1MB |
| yolov8s16-pose | 3ms | 116ms | 19ms | 20.9MB |
yolov8nu8-pose denotes the yolov8n pose model converted to uint8 format with the VSI ACUITY toolkit.
yolov8n16-pose denotes the yolov8n pose model converted to int16 format with the VSI ACUITY toolkit.
Segment Model
| Model | Pre-process | Inference | Post-process | NB Size |
| --- | --- | --- | --- | --- |
| yolov8nu8-seg | 3ms | 30ms | 26ms | 7.67MB |
| yolov8n16-seg | 3ms | 60ms | 36ms | 9.69MB |
| yolov8su8-seg | 3ms | 59ms | 26ms | 10.8MB |
| yolov8s16-seg | 3ms | 138ms | 37ms | 21.7MB |
yolov8nu8-seg denotes the yolov8n segment model converted to uint8 format with the VSI ACUITY toolkit.
yolov8n16-seg denotes the yolov8n segment model converted to int16 format with the VSI ACUITY toolkit.
Classify Model
| Model | Pre-process | Inference | Post-process | NB Size |
| --- | --- | --- | --- | --- |
| yolov8nu8-cls | 0ms | 4ms | 0ms | 2.1MB |
| yolov8n16-cls | 0ms | 5ms | 0ms | 4.51MB |
| yolov8su8-cls | 0ms | 5ms | 0ms | 4.49MB |
| yolov8s16-cls | 0ms | 9ms | 0ms | 10.1MB |
| yolov8x16-cls | 0ms | 46ms | 0ms | 86.2MB |
yolov8nu8-cls denotes the yolov8n classify model converted to uint8 format with the VSI ACUITY toolkit.
yolov8n16-cls denotes the yolov8n classify model converted to int16 format with the VSI ACUITY toolkit.
yolov8x16-cls VS yolov8n16-cls VS yolov8nu8-cls
yolov8x16-cls denotes the yolov8x classify model converted to int16 format with the VSI ACUITY toolkit; yolov8n16-cls and yolov8nu8-cls are defined above.
Based on the detection results and performance data of YOLOv8 classify nano uint8, nano int16, and extra int16, we conclude that YOLOv8n16-cls is significantly faster than YOLOv8x16-cls, only slightly less accurate, and has a much smaller NB.
Tests show that the results from YOLOv8nu8-cls, YOLOv8su8-cls, and even YOLOv8xu8-cls are highly unreliable. Theoretically, this is because uint8 quantization offers only 256 levels while the official model has 1000 classes, so per-class scores cannot be resolved. We therefore recommend using an int16 NB instead of a uint8 NB.
Based on the official model parameters and our measurements on the C3V platform, we recommend YOLOv8n16-cls.
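The 256-level argument can be illustrated numerically. The sketch below uses hypothetical random scores for a 1000-class head (the real classify head produces different values) and a generic affine uint8 quantizer, which is our assumption about what the toolkit does:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical class scores for a 1000-class head, standing in for
# the real yolov8 classify output.
scores = rng.random(1000).astype(np.float32)

def quantize_uint8(x):
    # Generic affine uint8 quantization over the tensor's range:
    # only 256 representable levels.
    scale = (x.max() - x.min()) / 255.0
    return np.round((x - x.min()) / scale).astype(np.uint8)

q = quantize_uint8(scores)
# 1000 distinct float scores are squeezed into at most 256 levels,
# so many classes collapse onto the same quantized value and the
# ranking among close classes becomes unreliable.
print("distinct float scores:", np.unique(scores).size)
print("distinct uint8 levels:", np.unique(q).size)
```

With int16 there are 65536 levels, far more than 1000 classes, which is consistent with the int16 NBs giving reliable classification results.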
OBB Model
| Model | Pre-process | Inference | Post-process | NB Size |
| --- | --- | --- | --- | --- |
| yolov8nu8-obb | 7ms | 74ms | 14ms | 6.44MB |
| yolov8n16-obb | 9ms | 148ms | 18ms | 13.7MB |
| yolov8su8-obb | 7ms | 119ms | 14ms | 18.8MB |
| yolov8s16-obb | 9ms | 374ms | 18ms | 23.5MB |
yolov8nu8-obb denotes the yolov8n OBB model converted to uint8 format with the VSI ACUITY toolkit.
yolov8n16-obb denotes the yolov8n OBB model converted to int16 format with the VSI ACUITY toolkit.