OpenAI-Clip: Optimized for Mobile Deployment
Multi-modal foundational model for vision and language tasks like image/text similarity and for zero-shot image classification
Contrastive Language-Image Pre-Training (CLIP) uses a ViT-like transformer to extract visual features and a causal language model to extract text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks.
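As a rough illustration of how these features combine for zero-shot classification, the sketch below scores one image embedding against a handful of text (label) embeddings using cosine similarity. It is a minimal, self-contained example: the random tensors and the 512-dimensional embedding size are placeholders standing in for whatever the image and text encoders in this package actually produce.

import torch
import torch.nn.functional as F

# Hypothetical embeddings: one image feature vector and one text feature
# vector per candidate label, as produced by the two CLIP encoders.
image_features = torch.randn(1, 512)  # (num_images, embedding_dim)
text_features = torch.randn(3, 512)   # (num_labels, embedding_dim)

# Normalize so the dot product equals cosine similarity.
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)

# Scaled similarities -> per-label probabilities for zero-shot classification.
logits = 100.0 * image_features @ text_features.T
probs = logits.softmax(dim=-1)
print(probs)  # shape: (num_images, num_labels)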
This model is an implementation of OpenAI-Clip found here.
This repository provides scripts to run OpenAI-Clip on Qualcomm® devices. More details on model performance across various devices can be found here.
Model Details
- Model Type: Image classification
- Model Stats:
- Model checkpoint: ViT-B/16
- Image input resolution: 224x224
- Text context length: 77
- Number of parameters (CLIPTextEncoder): 76.0M
- Model size (CLIPTextEncoder): 290 MB
- Number of parameters (CLIPImageEncoder): 115M
- Model size (CLIPImageEncoder): 437 MB
Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
---|---|---|---|---|---|---|---|---|
CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 34.591 ms | 0 - 57 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 26.472 ms | 0 - 55 MB | FP16 | NPU | OpenAI-Clip.so |
CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 27.035 ms | 0 - 264 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 20.808 ms | 1 - 170 MB | FP16 | NPU | OpenAI-Clip.so |
CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 24.249 ms | 0 - 266 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 18.669 ms | 0 - 171 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 33.984 ms | 0 - 55 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 19.984 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | SA7255P ADP | SA7255P | TFLITE | 327.04 ms | 0 - 264 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | SA7255P ADP | SA7255P | QNN | 265.55 ms | 1 - 11 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 34.335 ms | 0 - 54 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 20.528 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | SA8295P ADP | SA8295P | TFLITE | 40.114 ms | 0 - 200 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | SA8295P ADP | SA8295P | QNN | 30.939 ms | 1 - 7 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 34.062 ms | 0 - 58 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 20.836 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | SA8775P ADP | SA8775P | TFLITE | 42.508 ms | 0 - 264 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | SA8775P ADP | SA8775P | QNN | 29.748 ms | 1 - 11 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 34.902 ms | 0 - 201 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 28.971 ms | 0 - 169 MB | FP16 | NPU | Use Export Script |
CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 22.167 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 5.809 ms | 0 - 24 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 4.636 ms | 0 - 18 MB | FP16 | NPU | OpenAI-Clip.so |
CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 3.991 ms | 0 - 83 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 3.281 ms | 0 - 68 MB | FP16 | NPU | OpenAI-Clip.so |
CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 3.351 ms | 0 - 83 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 3.197 ms | 0 - 68 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 5.613 ms | 0 - 23 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 4.743 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | SA7255P ADP | SA7255P | TFLITE | 61.341 ms | 0 - 82 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | SA7255P ADP | SA7255P | QNN | 51.576 ms | 0 - 11 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 5.729 ms | 0 - 23 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 4.772 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | SA8295P ADP | SA8295P | TFLITE | 7.632 ms | 0 - 68 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | SA8295P ADP | SA8295P | QNN | 6.53 ms | 0 - 6 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 5.678 ms | 0 - 19 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 4.872 ms | 0 - 1 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | SA8775P ADP | SA8775P | TFLITE | 8.137 ms | 0 - 81 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | SA8775P ADP | SA8775P | QNN | 6.947 ms | 0 - 6 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 6.349 ms | 0 - 74 MB | FP16 | NPU | OpenAI-Clip.tflite |
CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 5.399 ms | 0 - 71 MB | FP16 | NPU | Use Export Script |
CLIPTextEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 5.08 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
Installation
This model can be installed as a Python package via pip.
pip install "qai-hub-models[openai_clip]"
Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to Qualcomm® AI Hub with your
Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.
With this API token, you can configure your client to run models on cloud-hosted devices.
qai-hub configure --api_token API_TOKEN
Navigate to docs for more information.
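As a quick sanity check that the token was accepted, you can list the cloud-hosted devices visible to your account from Python (a minimal sketch using the qai_hub client):

import qai_hub as hub

# Lists the cloud-hosted devices available to your account; an authentication
# error here usually means the API token was not configured correctly.
for device in hub.get_devices():
    print(device.name)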
Demo off target
The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.
python -m qai_hub_models.models.openai_clip.demo
The above demo runs a reference implementation of pre-processing, model inference, and post-processing.
NOTE: To run this in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.
%run -m qai_hub_models.models.openai_clip.demo
Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
- Performance check on-device on a cloud-hosted device
- Downloads compiled assets that can be deployed on-device for Android.
- Accuracy check between PyTorch and on-device outputs.
python -m qai_hub_models.models.openai_clip.export
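The export script also accepts command-line options (for example, to target a different device or runtime); the exact flags may vary by version, so check the built-in help:

python -m qai_hub_models.models.openai_clip.export --help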
Profiling Results
------------------------------------------------------------
CLIPImageEncoder
Device : Samsung Galaxy S23 (13)
Runtime : TFLITE
Estimated inference time (ms) : 34.6
Estimated peak memory usage (MB): [0, 57]
Total # Ops : 659
Compute Unit(s) : NPU (659 ops)
------------------------------------------------------------
CLIPTextEncoder
Device : Samsung Galaxy S23 (13)
Runtime : TFLITE
Estimated inference time (ms) : 5.8
Estimated peak memory usage (MB): [0, 24]
Total # Ops : 660
Compute Unit(s) : NPU (658 ops) CPU (2 ops)
How does this work?
This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:
Step 1: Compile model for on-device deployment
To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.
import torch
import qai_hub as hub
from qai_hub_models.models.openai_clip import Model
# Load the model
model = Model.from_pretrained()
image_encoder_model = model.image_encoder
text_encoder_model = model.text_encoder
# Device
device = hub.Device("Samsung Galaxy S23")
# Trace model
image_encoder_input_shape = image_encoder_model.get_input_spec()
image_encoder_sample_inputs = image_encoder_model.sample_inputs()
traced_image_encoder_model = torch.jit.trace(image_encoder_model, [torch.tensor(data[0]) for _, data in image_encoder_sample_inputs.items()])
# Compile model on a specific device
image_encoder_compile_job = hub.submit_compile_job(
model=traced_image_encoder_model,
device=device,
input_specs=image_encoder_model.get_input_spec(),
)
# Get target model to run on-device
image_encoder_target_model = image_encoder_compile_job.get_target_model()
# Trace model
text_encoder_input_shape = text_encoder_model.get_input_spec()
text_encoder_sample_inputs = text_encoder_model.sample_inputs()
traced_text_encoder_model = torch.jit.trace(text_encoder_model, [torch.tensor(data[0]) for _, data in text_encoder_sample_inputs.items()])
# Compile model on a specific device
text_encoder_compile_job = hub.submit_compile_job(
model=traced_text_encoder_model,
device=device,
input_specs=text_encoder_model.get_input_spec(),
)
# Get target model to run on-device
text_encoder_target_model = text_encoder_compile_job.get_target_model()
Step 2: Performance profiling on cloud-hosted device
After compiling the models from Step 1, they can be profiled on-device using the
target_model. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
image_encoder_profile_job = hub.submit_profile_job(
model=image_encoder_target_model,
device=device,
)
text_encoder_profile_job = hub.submit_profile_job(
model=text_encoder_target_model,
device=device,
)
Step 3: Verify on-device accuracy
To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.
image_encoder_input_data = image_encoder_model.sample_inputs()
image_encoder_inference_job = hub.submit_inference_job(
model=image_encoder_target_model,
device=device,
inputs=image_encoder_input_data,
)
image_encoder_inference_job.download_output_data()
text_encoder_input_data = text_encoder_model.sample_inputs()
text_encoder_inference_job = hub.submit_inference_job(
model=text_encoder_target_model,
device=device,
inputs=text_encoder_input_data,
)
text_encoder_inference_job.download_output_data()
With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
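For example, PSNR and a relative error between an on-device output and the PyTorch reference can be computed with a few lines of NumPy (a minimal sketch; the placeholder arrays stand in for the downloaded on-device output and the corresponding PyTorch output, reshaped to the same shape):

import numpy as np

# Placeholder arrays: substitute the on-device output downloaded from the
# inference job and the PyTorch model's output on the same sample inputs.
on_device_output = np.random.randn(1, 512).astype(np.float32)
torch_output = np.random.randn(1, 512).astype(np.float32)

# Peak signal-to-noise ratio (higher is better).
mse = np.mean((on_device_output - torch_output) ** 2)
peak = np.max(np.abs(torch_output))
psnr = 10 * np.log10((peak ** 2) / mse)

# Relative error (lower is better).
rel_error = np.linalg.norm(on_device_output - torch_output) / np.linalg.norm(torch_output)

print(f"PSNR: {psnr:.2f} dB, relative error: {rel_error:.4f}")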
Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access.
Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.
- QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
View on Qualcomm® AI Hub
Get more details on OpenAI-Clip's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
License
- The license for the original implementation of OpenAI-Clip can be found here.
- The license for the compiled assets for on-device deployment can be found here.
References
Community
- Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.
- For questions or feedback, please reach out to us.