QuickSRNetMedium-Quantized: Optimized for Mobile Deployment
Upscale images and remove image noise
QuickSRNet Medium is designed to upscale and sharpen images on mobile platforms in real time.
This model is an implementation of QuickSRNetMedium-Quantized found here. This repository provides scripts to run QuickSRNetMedium-Quantized on Qualcomm® devices. More details on model performance across various devices can be found here.
Model Details
- Model Type: Super resolution
- Model Stats:
- Model checkpoint: quicksrnet_medium_3x_checkpoint
- Input resolution: 128x128
- Number of parameters: 55.0K
- Model size: 67.2 KB
| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.165 | 0 - 76 | INT8 | NPU | QuickSRNetMedium-Quantized.tflite |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.512 | 0 - 66 | INT8 | NPU | QuickSRNetMedium-Quantized.so |
Installation
This model can be installed as a Python package via pip.
pip install "qai-hub-models[quicksrnetmedium_quantized]"
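A quick way to verify the install is to import the package and this model's module (a sanity check, not part of the original instructions):

```python
# Both imports should succeed after the pip install above.
import qai_hub_models
from qai_hub_models.models import quicksrnetmedium_quantized
print("qai-hub-models installed successfully")
```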
Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.
With this API token, you can configure your client to run models on cloud-hosted devices.
qai-hub configure --api_token API_TOKEN
Navigate to docs for more information.
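Once the token is configured, you can confirm that the client can reach AI Hub, for example by listing the cloud-hosted devices available to your account (a minimal check using the qai_hub client API):

```python
import qai_hub as hub

# Lists cloud-hosted devices; this fails with an authentication error
# if the API token was not configured correctly.
for device in hub.get_devices():
    print(device.name)
```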
Demo off target
The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.
python -m qai_hub_models.models.quicksrnetmedium_quantized.demo
The above demo runs a reference implementation of pre-processing, model inference, and post-processing.
NOTE: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above command).
%run -m qai_hub_models.models.quicksrnetmedium_quantized.demo
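The demo wraps a plain PyTorch module, so you can also load and run the model directly in Python. A minimal sketch, assuming the standard qai-hub-models convention of a `Model` alias exported from each model package; the expected output size follows from the 3x checkpoint and 128x128 input resolution listed above:

```python
import torch
from qai_hub_models.models.quicksrnetmedium_quantized import Model

# Download pre-trained weights and build the model.
model = Model.from_pretrained()

# Dummy 128x128 RGB input; the 3x model should return a 384x384 image.
x = torch.rand(1, 3, 128, 128)
with torch.no_grad():
    upscaled = model(x)
print(upscaled.shape)  # expected: torch.Size([1, 3, 384, 384])
```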
Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
- Runs a performance check on-device on a cloud-hosted device
- Downloads compiled assets that can be deployed on-device for Android
- Checks accuracy between PyTorch and on-device outputs
python -m qai_hub_models.models.quicksrnetmedium_quantized.export
Profile Job summary of QuickSRNetMedium-Quantized
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 0.55 ms
Estimated Peak Memory Range: 0.05-0.05 MB
Compute Units: NPU (10) | Total (10)
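The same compile-and-profile round trip can also be driven programmatically through the Qualcomm® AI Hub Python API. A minimal sketch, not the export script's exact pipeline: it assumes the model traces cleanly with torch.jit.trace (the quantized export path performs additional quantization steps this sketch omits) and uses a device name from the table above:

```python
import torch
import qai_hub as hub
from qai_hub_models.models.quicksrnetmedium_quantized import Model

# Load and trace the PyTorch model so it can be submitted for compilation.
torch_model = Model.from_pretrained()
traced = torch.jit.trace(torch_model, torch.rand(1, 3, 128, 128))

device = hub.Device("Samsung Galaxy S23 Ultra")

# Compile for the target device, then profile the compiled asset.
compile_job = hub.submit_compile_job(
    traced,
    device=device,
    input_specs=dict(image=(1, 3, 128, 128)),
)
profile_job = hub.submit_profile_job(
    compile_job.get_target_model(),
    device=device,
)
print(profile_job.url)  # job page with detailed on-device metrics
```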
Run demo on a cloud-hosted device
You can also run the demo on-device.
python -m qai_hub_models.models.quicksrnetmedium_quantized.demo --on-device
NOTE: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above command).
%run -m qai_hub_models.models.quicksrnetmedium_quantized.demo -- --on-device
Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.
- QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
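Before wiring the .tflite asset into an Android app, it can be useful to sanity-check it with the TensorFlow Lite interpreter in Python. A minimal sketch; the file name matches the exported asset in the table above, and a quantized model's input may be integer-typed, so the dtype is read from the model rather than assumed:

```python
import numpy as np
import tensorflow as tf

# Load the exported TFLite asset.
interpreter = tf.lite.Interpreter(model_path="QuickSRNetMedium-Quantized.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input with the model's expected shape and dtype.
x = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()

upscaled = interpreter.get_tensor(out["index"])
print("input:", inp["shape"], "-> output:", upscaled.shape)
```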
View on Qualcomm® AI Hub
Get more details on QuickSRNetMedium-Quantized's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
License
- The license for the original implementation of QuickSRNetMedium-Quantized can be found here.
- The license for the compiled assets for on-device deployment can be found here.
References
- QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms
- Source Model Implementation
Community
- Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.
- For questions or feedback please reach out to us.