qaihm-bot committed
Commit 89e3404
1 Parent(s): b80255b

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +15 -9
README.md CHANGED
@@ -32,12 +32,15 @@ More details on model performance across various devices, can be found
 - Model size (MediaPipePoseLandmarkDetector): 12.9 MB


+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.824 ms | 0 - 2 MB | FP16 | NPU | [MediaPipePoseDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.281 ms | 0 - 2 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.876 ms | 0 - 5 MB | FP16 | NPU | [MediaPipePoseDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.so)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.294 ms | 0 - 13 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.85 ms | 0 - 2 MB | FP16 | NPU | [MediaPipePoseDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.205 ms | 0 - 2 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.88 ms | 2 - 7 MB | FP16 | NPU | [MediaPipePoseDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.306 ms | 0 - 13 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.so)
+


 ## Installation
@@ -98,22 +101,24 @@ python -m qai_hub_models.models.mediapipe_pose.export
 Profile Job summary of MediaPipePoseDetector
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 1.05 ms
- Estimated Peak Memory Range: 0.48-0.48 MB
+ Estimated Inference Time: 1.09 ms
+ Estimated Peak Memory Range: 1.68-1.68 MB
 Compute Units: NPU (139) | Total (139)

 Profile Job summary of MediaPipePoseLandmarkDetector
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 1.49 ms
+ Estimated Inference Time: 1.46 ms
 Estimated Peak Memory Range: 0.75-0.75 MB
 Compute Units: NPU (305) | Total (305)


 ```
+
+
 ## How does this work?

- This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/MediaPipe-Pose-Estimation/export.py)
+ This [export script](https://aihub.qualcomm.com/models/mediapipe_pose/qai_hub_models/models/MediaPipe-Pose-Estimation/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:

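The hunk above updates profile numbers produced by the export entry point named in its context line, `python -m qai_hub_models.models.mediapipe_pose.export`. As a minimal sketch of how that command is typically driven from Python: only the module path is taken from this commit; running it with no extra arguments, and the need for a configured Qualcomm® AI Hub account, are assumptions rather than something this diff specifies.

```python
# Minimal sketch: invoke the export/profile entry point referenced in the
# hunk header above. Only the module path comes from this commit; default
# arguments (device, runtime, output directory) are an assumption.
import subprocess

subprocess.run(
    ["python", "-m", "qai_hub_models.models.mediapipe_pose.export"],
    check=True,  # fail loudly if the export or profile job cannot be submitted
)
```

On success, the command prints per-model profile job summaries like those shown in the hunk: estimated inference time, estimated peak memory range, and compute-unit placement.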
@@ -191,6 +196,7 @@ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).



+
 ## Deploying compiled model to Android


@@ -212,7 +218,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of MediaPipe-Pose-Estimation can be found
 [here](https://github.com/zmurez/MediaPipePyTorch/blob/master/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+ - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)

 ## References
 * [BlazePose: On-device Real-time Body Pose tracking](https://arxiv.org/abs/2006.10204)
 