---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32ai-modelzoo/raw/refs/heads/main/pose_estimation/LICENSE.md
pipeline_tag: keypoint-detection
---
# Yolov11n_pose quantized

## **Use case** : `Pose estimation`

# Model description

Yolov11n_pose is a lightweight and efficient model designed for multi-person pose estimation tasks. It is part of the YOLO (You Only Look Once) family of models, known for their real-time object detection capabilities. The "n" in Yolov11n_pose indicates that it is a nano version, optimized for speed and resource efficiency, making it suitable for deployment on devices with limited computational power, such as mobile devices and embedded systems.

Yolov11n_pose is implemented in PyTorch by Ultralytics and quantized to int8 using the TensorFlow Lite converter.
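
As an illustration of that flow, the sketch below exports the Ultralytics YOLO11n-pose weights to an int8 TensorFlow Lite model. It is a minimal sketch assuming the `ultralytics` Python package; the weight file name, image size, and calibration dataset are illustrative and may differ from the settings used to produce the model published here.

```python
# Minimal export sketch (assumed settings, not the exact recipe used for this model).
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")   # pretrained nano pose weights from Ultralytics
model.export(
    format="tflite",              # go through the TensorFlow Lite converter
    int8=True,                    # int8 post-training quantization
    imgsz=256,                    # input resolution, e.g. 256x256
    data="coco-pose.yaml",        # representative data used for calibration
)
```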

## Network information

| Network information | Value |
|---------------------|-----------------|
| Framework           | TensorFlow Lite |
| Quantization        | int8            |
| Provenance          | https://docs.ultralytics.com/tasks/pose/ |


## Networks inputs / outputs

With an image resolution of NxM and K keypoints to detect:

| Input Shape | Description |
| ----- | ----------- |
| (1, N, M, 3) | Single NxM RGB image with UINT8 values between 0 and 255 |

| Output Shape | Description |
| ----- | ----------- |
| (1, Kx3, F) | FLOAT values, where F = (N/8)^2 + (N/16)^2 + (N/32)^2 is the number of predictions over the 3 concatenated feature maps and K is the number of keypoints |
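
For example, at a 256x256 input resolution, F = 32^2 + 16^2 + 8^2 = 1024 + 256 + 64 = 1344. The sketch below runs the quantized model on a host machine with the TensorFlow Lite interpreter; the model file name is an assumption (any of the `.tflite` files linked in the tables below can be used), and keypoint decoding / NMS post-processing is omitted.

```python
# Minimal host-side inference sketch with the TensorFlow Lite interpreter.
# The file name is illustrative; post-processing of the raw output is omitted.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolo11n_256_quant_pc_uf_pose_coco-st.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]    # expects a (1, N, M, 3) UINT8 tensor
out = interpreter.get_output_details()[0]

# Dummy 256x256 RGB frame; replace with a real image resized to the model input.
image = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)

interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
raw = interpreter.get_tensor(out["index"])
print(raw.shape)                             # e.g. (1, K*3, 1344) at 256x256
```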

## Recommended Platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0  | []        | []          |
| STM32L4  | []        | []          |
| STM32U5  | []        | []          |
| STM32H7  | []        | []          |
| STM32MP1 | []        | []          |
| STM32MP2 | []        | []          |
| STM32N6  | [x]       | [x]         |


# Performances

## Metrics

Measurements are done with the default STM32Cube.AI configuration with the input / output allocated option enabled.

> [!CAUTION]
> All YOLOv11 hyperlinks in the tables below link to an external GitHub folder, which is subject to its own license terms:
> https://github.com/stm32-hotspot/ultralytics/blob/main/LICENSE
>
> Please also check the folder's README.md file for detailed information about its use and content:
> https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/README.md

### Reference **NPU** memory footprint based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|----------------------|-----------------------|
| [YOLOv11n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolo11/yolo11n_256_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6 | 742.95 | 0.0 | 3543.04 | 10.2.0 | 2.2.0 |


### Reference **NPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|----------------------|-----------------------|
| [YOLOv11n pose per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/pose_estimation/yolo11/yolo11n_256_quant_pc_uf_pose_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 37.39 | 26.74 | 10.2.0 | 2.2.0 |


## Integration in a simple example and other services support

Please refer to the stm32ai-modelzoo-services GitHub [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).
The models are stored in the Ultralytics repository. You can find them at the following link: [Ultralytics YOLOv8-STEdgeAI Models](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/).

Please refer to the [Ultralytics documentation](https://docs.ultralytics.com/tasks/pose/#train) to retrain the models.
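
As a starting point, a minimal retraining sketch with the Ultralytics API is shown below; the dataset configuration, epoch count, and image size are placeholder assumptions, so refer to the documentation above for the full set of options.

```python
# Minimal retraining sketch (placeholder dataset / hyperparameters).
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")   # start from the pretrained pose weights
model.train(
    data="coco-pose.yaml",        # pose dataset config; replace with your own
    epochs=100,
    imgsz=256,                    # match the target deployment resolution
)
```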

# References

<a id="1">[1]</a>
“Microsoft COCO: Common Objects in Context”. [Online]. Available: https://cocodataset.org/#download.

@article{DBLP:journals/corr/LinMBHPRDZ14,
  author    = {Tsung{-}Yi Lin and
               Michael Maire and
               Serge J. Belongie and
               Lubomir D. Bourdev and
               Ross B. Girshick and
               James Hays and
               Pietro Perona and
               Deva Ramanan and
               Piotr Doll{\'{a}}r and
               C. Lawrence Zitnick},
  title     = {Microsoft {COCO:} Common Objects in Context},
  journal   = {CoRR},
  volume    = {abs/1405.0312},
  year      = {2014},
  url       = {http://arxiv.org/abs/1405.0312},
  archivePrefix = {arXiv},
  eprint    = {1405.0312},
  timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}