--- |
|
library_name: pytorch |
|
license: apache-2.0 |
|
pipeline_tag: image-classification |
|
tags: |
|
- real_time |
|
- android |
|
|
|
--- |
|
|
|
![MediaPipe-Pose-Estimation demo](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mediapipe_pose/web-assets/model_demo.png)
|
|
|
# MediaPipe-Pose-Estimation: Optimized for Mobile Deployment |
|
## Detect and track human body poses in real-time images and video streams |
|
|
|
The MediaPipe Pose Landmark Detector is a machine learning pipeline that predicts bounding boxes and pose skeletons for people in an image.
|
|
|
This model is an implementation of MediaPipe-Pose-Estimation found [here](https://github.com/zmurez/MediaPipePyTorch/). |
|
This repository provides scripts to run MediaPipe-Pose-Estimation on Qualcomm® devices. |
|
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mediapipe_pose).
|
|
|
|
|
### Model Details |
|
|
|
- **Model Type:** Pose estimation |
|
- **Model Stats:** |
|
- Input resolution: 256x256 |
|
- Number of parameters (MediaPipePoseDetector): 815K |
|
- Model size (MediaPipePoseDetector): 3.14 MB |
|
- Number of parameters (MediaPipePoseLandmarkDetector): 3.37M |
|
- Model size (MediaPipePoseLandmarkDetector): 12.9 MB |
|
|
|
|
|
|
|
|
|
| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|
| ---|---|---|---|---|---|---|---| |
|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.793 ms | 0 - 14 MB | FP16 | NPU | [MediaPipePoseDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.tflite) |
|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.839 ms | 0 - 174 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.tflite) |
|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.851 ms | 0 - 102 MB | FP16 | NPU | [MediaPipePoseDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.so) |
|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.906 ms | 0 - 9 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.so) |
|
|
|
|
|
|
|
## Installation |
|
|
|
This model can be installed as a Python package via pip. |
|
|
|
```bash |
|
pip install qai-hub-models |
|
``` |
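
To verify the installation, a quick import check can be run (a minimal sketch; it only confirms that the package and this model's module resolve):

```python
# Confirm the package and this model's module are importable after installation.
from qai_hub_models.models import mediapipe_pose
print(mediapipe_pose.__file__)
```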
|
|
|
|
|
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device |
|
|
|
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on
cloud-hosted devices.
|
```bash |
|
qai-hub configure --api_token API_TOKEN |
|
``` |
|
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. |
|
|
|
|
|
|
|
## Demo off target |
|
|
|
The package contains a simple end-to-end demo that downloads pre-trained |
|
weights and runs this model on a sample input. |
|
|
|
```bash |
|
python -m qai_hub_models.models.mediapipe_pose.demo |
|
``` |
|
|
|
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
|
|
|
**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to a cell instead of the above command.
|
``` |
|
%run -m qai_hub_models.models.mediapipe_pose.demo |
|
``` |
|
|
|
|
|
### Run model on a cloud-hosted device |
|
|
|
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on-device on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between PyTorch and on-device outputs.
|
|
|
```bash |
|
python -m qai_hub_models.models.mediapipe_pose.export |
|
``` |
|
|
|
``` |
|
Profile Job summary of MediaPipePoseDetector |
|
-------------------------------------------------- |
|
Device: Snapdragon X Elite CRD (11) |
|
Estimated Inference Time: 0.99 ms |
|
Estimated Peak Memory Range: 1.61-1.61 MB |
|
Compute Units: NPU (138) | Total (138) |
|
|
|
Profile Job summary of MediaPipePoseLandmarkDetector |
|
-------------------------------------------------- |
|
Device: Snapdragon X Elite CRD (11) |
|
Estimated Inference Time: 1.11 ms |
|
Estimated Peak Memory Range: 0.75-0.75 MB |
|
Compute Units: NPU (290) | Total (290)
```
|
|
|
|
|
## How does this work? |
|
|
|
This [export script](https://aihub.qualcomm.com/models/mediapipe_pose/qai_hub_models/models/MediaPipe-Pose-Estimation/export.py) |
|
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model |
|
on-device. Let's go through each step below in detail:
|
|
|
Step 1: **Compile model for on-device deployment** |
|
|
|
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
|
|
|
```python
import torch

import qai_hub as hub
from qai_hub_models.models.mediapipe_pose import MediaPipePoseDetector, MediaPipePoseLandmarkDetector

# Load the models
pose_detector_model = MediaPipePoseDetector.from_pretrained()
pose_landmark_detector_model = MediaPipePoseLandmarkDetector.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace the pose detector using its sample inputs
pose_detector_sample_inputs = pose_detector_model.sample_inputs()
traced_pose_detector_model = torch.jit.trace(
    pose_detector_model,
    [torch.tensor(data[0]) for _, data in pose_detector_sample_inputs.items()],
)

# Compile the pose detector for a specific device
pose_detector_compile_job = hub.submit_compile_job(
    model=traced_pose_detector_model,
    device=device,
    input_specs=pose_detector_model.get_input_spec(),
)

# Get target model to run on-device
pose_detector_target_model = pose_detector_compile_job.get_target_model()

# Trace the pose landmark detector using its sample inputs
pose_landmark_detector_sample_inputs = pose_landmark_detector_model.sample_inputs()
traced_pose_landmark_detector_model = torch.jit.trace(
    pose_landmark_detector_model,
    [torch.tensor(data[0]) for _, data in pose_landmark_detector_sample_inputs.items()],
)

# Compile the pose landmark detector for a specific device
pose_landmark_detector_compile_job = hub.submit_compile_job(
    model=traced_pose_landmark_detector_model,
    device=device,
    input_specs=pose_landmark_detector_model.get_input_spec(),
)

# Get target model to run on-device
pose_landmark_detector_target_model = pose_landmark_detector_compile_job.get_target_model()
```
|
|
|
|
|
Step 2: **Performance profiling on cloud-hosted device** |
|
|
|
After compiling the models from Step 1, they can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
|
```python |
|
pose_detector_profile_job = hub.submit_profile_job( |
|
model=pose_detector_target_model, |
|
device=device, |
|
) |
|
pose_landmark_detector_profile_job = hub.submit_profile_job( |
|
model=pose_landmark_detector_target_model, |
|
device=device, |
|
)
```
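
Once the jobs complete, the profiling results can also be fetched programmatically instead of read from the job URL. A minimal sketch, assuming the `download_profile()` method on `qai_hub` profile jobs (check your client version's documentation):

```python
# Block until each job finishes and download the raw profiling data
# (download_profile() is assumed here; consult the qai_hub client docs).
pose_detector_profile = pose_detector_profile_job.download_profile()
pose_landmark_detector_profile = pose_landmark_detector_profile_job.download_profile()
print(pose_detector_profile)
```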
|
|
|
Step 3: **Verify on-device accuracy** |
|
|
|
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
|
```python
pose_detector_input_data = pose_detector_model.sample_inputs()
pose_detector_inference_job = hub.submit_inference_job(
    model=pose_detector_target_model,
    device=device,
    inputs=pose_detector_input_data,
)
pose_detector_output = pose_detector_inference_job.download_output_data()

pose_landmark_detector_input_data = pose_landmark_detector_model.sample_inputs()
pose_landmark_detector_inference_job = hub.submit_inference_job(
    model=pose_landmark_detector_target_model,
    device=device,
    inputs=pose_landmark_detector_input_data,
)
pose_landmark_detector_output = pose_landmark_detector_inference_job.download_output_data()
```
|
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
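
For instance, a minimal PSNR check might look like the sketch below; `on_device_out` and `torch_out` are hypothetical placeholders for a pair of matching arrays, e.g. one taken from the `download_output_data()` results above and one from a local PyTorch forward pass.

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two arrays with values in [0, peak]."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

# Hypothetical arrays: `on_device_out` from the inference job above and
# `torch_out` from running the PyTorch model locally on the same inputs.
# print(f"PSNR: {psnr(on_device_out, torch_out):.2f} dB")
```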
|
|
|
**Note**: On-device profiling and inference require access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
|
|
|
|
|
|
|
|
|
## Deploying compiled model to Android |
|
|
|
|
|
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the `.tflite` model in an Android application. A quick local sanity check of the exported model is sketched after this list.

- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.
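
Before wiring the `.tflite` asset into an Android app, it can be sanity-checked locally with the TensorFlow Lite interpreter in Python. A minimal sketch, assuming `tensorflow` is installed and `MediaPipePoseDetector.tflite` has been downloaded to the working directory:

```python
import numpy as np
import tensorflow as tf

# Load the compiled detector and inspect its input/output signature.
interpreter = tf.lite.Interpreter(model_path="MediaPipePoseDetector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print("input:", input_details[0]["shape"], input_details[0]["dtype"])

# Run a single inference on random data shaped like the model input.
dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print("output:", interpreter.get_tensor(output_details[0]["index"]).shape)
```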
|
|
|
|
|
## View on Qualcomm® AI Hub |
|
Get more details on MediaPipe-Pose-Estimation's performance across various devices [here](https://aihub.qualcomm.com/models/mediapipe_pose). |
|
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
|
|
|
## License |
|
- The license for the original implementation of MediaPipe-Pose-Estimation can be found |
|
[here](https://github.com/zmurez/MediaPipePyTorch/blob/master/LICENSE). |
|
- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
|
|
|
## References |
|
* [BlazePose: On-device Real-time Body Pose tracking](https://arxiv.org/abs/2006.10204) |
|
* [Source Model Implementation](https://github.com/zmurez/MediaPipePyTorch/) |
|
|
|
## Community |
|
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. |
|
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). |
|
|
|
|
|
|