Object Detection

ST YOLO LC V1 quantized

Use case: Object detection

Model description

ST YOLO LC v1 is a lightweight, real-time object detection model implemented in TensorFlow.

The model is quantized to int8 using the TensorFlow Lite converter.
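As an illustration, full-integer quantization with the TensorFlow Lite converter can be sketched as below. The tiny convolutional model and the random calibration data are placeholders, not the actual st_yolo_lc_v1 network or COCO images; the converter settings are the standard ones for int8 quantization with a UINT8 input.

```python
# Sketch of int8 post-training quantization with the TensorFlow Lite converter.
# The model and calibration data below are placeholders for illustration only.
import numpy as np
import tensorflow as tf

# Placeholder network standing in for st_yolo_lc_v1 (192x192x3 input).
inputs = tf.keras.Input(shape=(192, 192, 3))
x = tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu")(inputs)
x = tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu")(x)
model = tf.keras.Model(inputs, x)

def representative_dataset():
    # Calibration samples; in practice, preprocessed training images.
    for _ in range(10):
        yield [np.random.rand(1, 192, 192, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8  # matches the UINT8 input described below
tflite_model = converter.convert()

# Sanity check: the quantized model takes UINT8 input.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_dtype = interpreter.get_input_details()[0]["dtype"]
```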

Network information

| Network information | Value |
|---------------------|-------|
| Framework | TensorFlow Lite |
| Quantization | int8 |
| Paper | https://pjreddie.com/media/files/papers/YOLO9000.pdf |

Network inputs / outputs

For an input image of width W, height H, and NC classes:

| Input Shape | Description |
|-------------|-------------|
| (1, W, H, 3) | Single WxH RGB image with UINT8 values between 0 and 255 |

| Output Shape | Description |
|--------------|-------------|
| (1, GxG, NAx(5+NC)) | FLOAT values, where GxG is the resolution of the output grid (not the input image), NA is the number of anchors, and NC is the number of classes |
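As a rough illustration of how such a flattened output tensor can be decoded, here is a YOLOv2-style sketch in NumPy, following the paper linked above. The anchor values, tensor layout, and exponential box-size encoding are assumptions for illustration, not the verified post-processing of st_yolo_lc_v1.

```python
# Illustrative YOLOv2-style decoding of a flattened detection head output.
# Layout and anchor handling are assumptions, not the model's verified code.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_predictions(raw, grid, anchors, num_classes):
    """Decode a (1, grid*grid, NA*(5+NC)) tensor into boxes in [0, 1] coordinates."""
    na = len(anchors)
    preds = raw.reshape(grid, grid, na, 5 + num_classes)
    boxes = []
    for gy in range(grid):
        for gx in range(grid):
            for a, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, to = preds[gy, gx, a, :5]
                cx = (gx + sigmoid(tx)) / grid  # box centre x, cell offset + sigmoid
                cy = (gy + sigmoid(ty)) / grid  # box centre y
                w = aw * np.exp(tw) / grid      # width: anchor scaled exponentially
                h = ah * np.exp(th) / grid      # height
                score = sigmoid(to)             # objectness score
                cls = int(preds[gy, gx, a, 5:].argmax())
                boxes.append((cx, cy, w, h, score, cls))
    return boxes
```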

Recommended Platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0 | [] | [] |
| STM32L4 | [] | [] |
| STM32U5 | [] | [] |
| STM32H7 | [x] | [x] |
| STM32MP1 | [x] | [x] |
| STM32MP2 | [x] | [] |
| STM32N6 | [x] | [] |

Performances

Metrics

Measurements are done with the default STM32Cube.AI configuration, with the input / output allocated option enabled.

Reference NPU memory footprint based on the COCO Person dataset (see "AP on COCO Person dataset" below for dataset details)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|----------------------|-----------------------|
| st_yolo_lc_v1 | COCO-Person | Int8 | 192x192x3 | STM32N6 | 252 | 0 | 316.69 | 10.2.0 | 2.2.0 |
| st_yolo_lc_v1 | COCO-Person | Int8 | 224x224x3 | STM32N6 | 343 | 0 | 316.69 | 10.2.0 | 2.2.0 |
| st_yolo_lc_v1 | COCO-Person | Int8 | 256x256x3 | STM32N6 | 576 | 0 | 316.69 | 10.2.0 | 2.2.0 |

Reference NPU inference time based on the COCO Person dataset (see "AP on COCO Person dataset" below for dataset details)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|----------------------|-----------------------|
| st_yolo_lc_v1 | COCO-Person | Int8 | 192x192x3 | STM32N6570-DK | NPU/MCU | 1.96 | 510.2 | 10.2.0 | 2.2.0 |
| st_yolo_lc_v1 | COCO-Person | Int8 | 224x224x3 | STM32N6570-DK | NPU/MCU | 2.36 | 423.73 | 10.2.0 | 2.2.0 |
| st_yolo_lc_v1 | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 3.02 | 331.13 | 10.2.0 | 2.2.0 |

Reference MCU memory footprint based on the COCO Person dataset (see "AP on COCO Person dataset" below for dataset details)

| Model | Format | Resolution | Series | Activation RAM (KiB) | Runtime RAM (KiB) | Weights Flash (KiB) | Code Flash (KiB) | Total RAM (KiB) | Total Flash (KiB) | STM32Cube.AI version |
|-------|--------|------------|--------|----------------------|-------------------|---------------------|------------------|-----------------|-------------------|----------------------|
| st_yolo_lc_v1 | Int8 | 192x192x3 | STM32H7 | 166.29 | 8.09 | 276.73 | 52.81 | 174.38 | 329.54 | 10.2.0 |
| st_yolo_lc_v1 | Int8 | 224x224x3 | STM32H7 | 217.29 | 8.09 | 276.73 | 52.82 | 225.38 | 329.55 | 10.2.0 |
| st_yolo_lc_v1 | Int8 | 256x256x3 | STM32H7 | 278.29 | 8.09 | 276.73 | 52.81 | 286.38 | 329.54 | 10.2.0 |

Reference MCU inference time based on the COCO Person dataset (see "AP on COCO Person dataset" below for dataset details)

| Model | Format | Resolution | Board | Execution Engine | Frequency | Inference time (ms) | STM32Cube.AI version |
|-------|--------|------------|-------|------------------|-----------|---------------------|----------------------|
| st_yolo_lc_v1 | Int8 | 192x192x3 | STM32H747I-DISCO | 1 CPU | 400 MHz | 179.36 | 10.2.0 |
| st_yolo_lc_v1 | Int8 | 224x224x3 | STM32H747I-DISCO | 1 CPU | 400 MHz | 244.75 | 10.2.0 |
| st_yolo_lc_v1 | Int8 | 256x256x3 | STM32H747I-DISCO | 1 CPU | 400 MHz | 320.79 | 10.2.0 |

Reference MPU inference time based on the COCO Person dataset (see "AP on COCO Person dataset" below for dataset details)

| Model | Format | Resolution | Quantization | Board | Execution Engine | Frequency | Inference time (ms) | %NPU | %GPU | %CPU | X-LINUX-AI version | Framework |
|-------|--------|------------|--------------|-------|------------------|-----------|---------------------|------|------|------|--------------------|-----------|
| st_yolo_lc_v1 | Int8 | 192x192x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 11.88 | 2.62 | 97.38 | 0 | v6.1.0 | OpenVX |
| st_yolo_lc_v1 | Int8 | 224x224x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 17.60 | 3.33 | 96.67 | 0 | v6.1.0 | OpenVX |
| st_yolo_lc_v1 | Int8 | 256x256x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 13.93 | 5.12 | 94.88 | 0 | v6.1.0 | OpenVX |
| st_yolo_lc_v1 | Int8 | 192x192x3 | per-channel | STM32MP157F-DK2 | 2 CPU | 800 MHz | 33.38 | NA | NA | 100 | v6.1.0 | TensorFlowLite 2.18.0 |
| st_yolo_lc_v1 | Int8 | 224x224x3 | per-channel | STM32MP157F-DK2 | 2 CPU | 800 MHz | 45.43 | NA | NA | 100 | v6.1.0 | TensorFlowLite 2.18.0 |
| st_yolo_lc_v1 | Int8 | 256x256x3 | per-channel | STM32MP157F-DK2 | 2 CPU | 800 MHz | 58.80 | NA | NA | 100 | v6.1.0 | TensorFlowLite 2.18.0 |
| st_yolo_lc_v1 | Int8 | 192x192x3 | per-channel | STM32MP135F-DK2 | 1 CPU | 1000 MHz | 52.63 | NA | NA | 100 | v6.1.0 | TensorFlowLite 2.18.0 |
| st_yolo_lc_v1 | Int8 | 224x224x3 | per-channel | STM32MP135F-DK2 | 1 CPU | 1000 MHz | 72.51 | NA | NA | 100 | v6.1.0 | TensorFlowLite 2.18.0 |
| st_yolo_lc_v1 | Int8 | 256x256x3 | per-channel | STM32MP135F-DK2 | 1 CPU | 1000 MHz | 95.84 | NA | NA | 100 | v6.1.0 | TensorFlowLite 2.18.0 |

** To get the most out of the STM32MP25 NPU hardware acceleration, please use per-tensor quantization.

AP on COCO Person dataset

Dataset details: link, license: CC BY 4.0, citation [1], number of classes: 80, number of images: 118,287

| Model | Format | Resolution | AP* |
|-------|--------|------------|-----|
| st_yolo_lc_v1 | Int8 | 192x192x3 | 30.7 % |
| st_yolo_lc_v1 | Float | 192x192x3 | 31.2 % |
| st_yolo_lc_v1 | Int8 | 224x224x3 | 34.2 % |
| st_yolo_lc_v1 | Float | 224x224x3 | 33.8 % |
| st_yolo_lc_v1 | Int8 | 256x256x3 | 35.6 % |
| st_yolo_lc_v1 | Float | 256x256x3 | 36.4 % |

* EVAL_IOU = 0.5, NMS_THRESH = 0.5, SCORE_THRESH = 0.001, MAX_DETECTIONS = 100
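A minimal sketch of the greedy non-maximum suppression step these thresholds parameterize, in NumPy. This is a generic implementation with the default values listed above, not the exact evaluation code used for the figures.

```python
# Generic greedy NMS sketch using the thresholds listed above.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, score_thresh=0.001, nms_thresh=0.5, max_det=100):
    """Keep the highest-scoring boxes, dropping those that overlap a kept box."""
    # Candidates above the score threshold, best score first.
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thresh]
    keep = []
    while order and len(keep) < max_det:
        best = order.pop(0)
        keep.append(best)
        # Discard remaining boxes that overlap the kept box too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < nms_thresh]
    return keep
```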

Retraining and Integration in a simple example:

Please refer to the stm32ai-modelzoo-services GitHub here

References

[1] T.-Y. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common Objects in Context," arXiv:1405.0312, 2014. [Online]. Available: http://arxiv.org/abs/1405.0312 and https://cocodataset.org/#download
