---
library_name: pytorch
license: apache-2.0
pipeline_tag: object-detection
tags:
- real_time
- quantized
- android

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mediapipe_face_quantized/web-assets/model_demo.png)

# MediaPipe-Face-Detection-Quantized: Optimized for Mobile Deployment
## Detect faces and locate facial features in real-time video and image streams

Designed for sub-millisecond processing, this model predicts bounding boxes and pose skeletons (left eye, right eye, nose tip, mouth, left eye tragion, and right eye tragion) of faces in an image.

This model is an implementation of MediaPipe-Face-Detection-Quantized found [here](https://github.com/zmurez/MediaPipePyTorch/).
This repository provides scripts to run MediaPipe-Face-Detection-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mediapipe_face_quantized).

### Model Details

- **Model Type:** Object detection
- **Model Stats:**
  - Input resolution: 256x256
  - Number of output classes: 6
  - Number of parameters (MediaPipeFaceDetector): 135K
  - Model size (MediaPipeFaceDetector): 255 KB
  - Number of parameters (MediaPipeFaceLandmarkDetector): 603K
  - Model size (MediaPipeFaceLandmarkDetector): 746 KB

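The fixed 256x256 input resolution above implies a deterministic preprocessing step before inference. As a minimal sketch (an assumption, not taken from this card): BlazeFace-style detectors commonly expect uint8 pixels mapped from [0, 255] into [-1, 1]. Verify the exact range against the package's own preprocessing code before relying on it:

```python
def normalize_pixels(pixels):
    """Map uint8 pixel values in [0, 255] to floats in [-1, 1].

    The [-1, 1] target range is an assumption based on common
    BlazeFace-style preprocessing, not confirmed by this card.
    """
    return [p / 127.5 - 1.0 for p in pixels]

# A 256x256 RGB frame flattens to 256 * 256 * 3 = 196,608 values;
# two sample pixel values are shown here for brevity.
print(normalize_pixels([0, 255]))  # -> [-1.0, 1.0]
```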
| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.25 ms | 0 - 1 MB | FP16 | NPU | [MediaPipeFaceDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Face-Detection-Quantized/blob/main/MediaPipeFaceDetector.tflite) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.153 ms | 0 - 36 MB | FP16 | NPU | [MediaPipeFaceLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Face-Detection-Quantized/blob/main/MediaPipeFaceLandmarkDetector.tflite) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.295 ms | 0 - 45 MB | FP16 | NPU | [MediaPipeFaceDetector.so](https://huggingface.co/qualcomm/MediaPipe-Face-Detection-Quantized/blob/main/MediaPipeFaceDetector.so) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.208 ms | 0 - 10 MB | FP16 | NPU | [MediaPipeFaceLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Face-Detection-Quantized/blob/main/MediaPipeFaceLandmarkDetector.so) |

## Installation

This model can be installed as a Python package via pip.

```bash
pip install qai-hub-models
```

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on cloud-hosted
devices.
```bash
qai-hub configure --api_token API_TOKEN
```
See the [docs](https://app.aihub.qualcomm.com/docs/) for more information.

## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.mediapipe_face_quantized.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: To run the demo in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mediapipe_face_quantized.demo
```

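Conceptually, the demo chains the two models listed in the stats above: the face detector proposes bounding boxes, and the landmark detector refines each detected crop. A stubbed sketch of that two-stage flow (function names and return values are illustrative only, not the package's API):

```python
def detect_faces(image):
    # Stage 1 (MediaPipeFaceDetector): propose face bounding boxes.
    # Stubbed with a single fixed (x0, y0, x1, y1) box for illustration.
    return [(32, 32, 192, 192)]

def detect_landmarks(image, box):
    # Stage 2 (MediaPipeFaceLandmarkDetector): refine a face crop into
    # facial landmarks. Stubbed to return the box center as one landmark.
    x0, y0, x1, y1 = box
    return [((x0 + x1) / 2, (y0 + y1) / 2)]

def run_pipeline(image):
    # Detector output feeds the landmark model, one face at a time.
    return [detect_landmarks(image, box) for box in detect_faces(image)]

print(run_pipeline(None))  # -> [[(112.0, 112.0)]]
```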
### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on-device on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.mediapipe_face_quantized.export
```

```
Profile Job summary of MediaPipeFaceDetector
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 0.41 ms
Estimated Peak Memory Range: 0.53-0.53 MB
Compute Units: NPU (118) | Total (118)

Profile Job summary of MediaPipeFaceLandmarkDetector
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 0.38 ms
Estimated Peak Memory Range: 0.60-0.60 MB
Compute Units: NPU (112) | Total (112)
```

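Since the two stages run sequentially per detected face, a back-of-the-envelope upper bound on end-to-end latency adds the two estimates above (this ignores pre/post-processing and any pipelining, so treat it as a rough ceiling, not a measurement):

```python
detector_ms = 0.41  # MediaPipeFaceDetector estimate from the profile above
landmark_ms = 0.38  # MediaPipeFaceLandmarkDetector estimate

pipeline_ms = detector_ms + landmark_ms  # per-face sequential bound
fps_bound = 1000.0 / pipeline_ms         # throughput ceiling at one face/frame

print(f"{pipeline_ms:.2f} ms per face, up to ~{fps_bound:.0f} FPS")
```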
## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.

- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub
Get more details on MediaPipe-Face-Detection-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/mediapipe_face_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
- The license for the original implementation of MediaPipe-Face-Detection-Quantized can be found
[here](https://github.com/zmurez/MediaPipePyTorch/blob/master/LICENSE).
- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References
* [BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs](https://arxiv.org/abs/1907.05047)
* [Source Model Implementation](https://github.com/zmurez/MediaPipePyTorch/)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).