v0.46.1
See https://github.com/quic/ai-hub-models/releases/v0.46.1 for changelog.
- MobileSam_MobileSAMDecoder_float.dlc +0 -3
- MobileSam_MobileSAMDecoder_float.onnx.zip +0 -3
- MobileSam_MobileSAMDecoder_float.tflite +0 -3
- MobileSam_MobileSAMEncoder_float.dlc +0 -3
- MobileSam_MobileSAMEncoder_float.onnx.zip +0 -3
- MobileSam_MobileSAMEncoder_float.tflite +0 -3
- README.md +94 -255
- tool-versions.yaml +0 -4
MobileSam_MobileSAMDecoder_float.dlc
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6f9259f8f1081c7f424f444dad4628b306d4b09d2986889273e1b928949689e9
-size 25510844

MobileSam_MobileSAMDecoder_float.onnx.zip
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2772dd78e84a15855249900109d502870b43b1daac3f057b9eaf614ab4417efd
-size 19071904

MobileSam_MobileSAMDecoder_float.tflite
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b97822a4892b738b9125bcd61155c0c95528d465a68d5bfd7ab3708e283de381
-size 24802476

MobileSam_MobileSAMEncoder_float.dlc
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3164dba04256d88eac5cbc73e1d31e7972000b492219407c4093190957862446
-size 28170956

MobileSam_MobileSAMEncoder_float.onnx.zip
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:dc0c69206927a5ac0a350f85b98679167cc1aaaf52a1e31fbdbe3e6d192e25f0
-size 21650039

MobileSam_MobileSAMEncoder_float.tflite
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2b28a249ea36ac002afe9f51c007a6061f9416ca0ed9c09048f09eb431065564
-size 27920084

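Each deleted asset above was stored as a Git LFS pointer: a three-line text stub (version, oid, size) that stands in for the real binary. A minimal sketch of parsing such a pointer, using the pointer shown for the decoder `.dlc` file; the helper name is illustrative and not part of ai-hub-models:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid is "<hash algorithm>:<hex digest>"; size is the blob size in bytes
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

# The pointer content of MobileSam_MobileSAMDecoder_float.dlc, as shown above
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6f9259f8f1081c7f424f444dad4628b306d4b09d2986889273e1b928949689e9
size 25510844
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 25510844
```

The `size` field is the only way to tell how large the deleted binaries were, since the diff only ever contained the pointers, not the blobs themselves.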
README.md
CHANGED
@@ -10,273 +10,112 @@ pipeline_tag: image-segmentation
-# MobileSam: Optimized for
-## Faster Segment Anything: Towards lightweight SAM for mobile applications
-
 Transformer based encoder-decoder where prompts specify what to segment in an image thereby allowing segmentation without the need for additional training. The image encoder generates embeddings and the lightweight decoder operates on the embeddings for point and mask based image segmentation.
-
-This
-
-| MobileSAMDecoder |
-
-Qualcomm®
-
-With this API token, you can configure your client to run models on the cloud
-hosted devices.
-```bash
-qai-hub configure --api_token API_TOKEN
-```
-Navigate to [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.
-
-## Demo off target
-
-The package contains a simple end-to-end demo that downloads pre-trained
-weights and runs this model on a sample input.
-
-```bash
-python -m qai_hub_models.models.mobilesam.demo
-```
-
-The above demo runs a reference implementation of pre-processing, model
-inference, and post processing.
-
-**NOTE**: If you want running in a Jupyter Notebook or Google Colab like
-environment, please add the following to your cell (instead of the above).
-```
-%run -m qai_hub_models.models.mobilesam.demo
-```
-
-### Run model on a cloud-hosted device
-
-In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
-device. This script does the following:
-* Performance check on-device on a cloud-hosted device
-* Downloads compiled assets that can be deployed on-device for Android.
-* Accuracy check between PyTorch and on-device outputs.
-
-```bash
-python -m qai_hub_models.models.mobilesam.export
-```
-
-## How does this work?
-
-This [export script](https://aihub.qualcomm.com/models/mobilesam/qai_hub_models/models/MobileSam/export.py)
-leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
-on-device. Lets go through each step below in detail:
-
-Step 1: **Compile model for on-device deployment**
-
-To compile a PyTorch model for on-device deployment, we first trace the model
-in memory using the `jit.trace` and then call the `submit_compile_job` API.
-
-```python
-import torch
-
-import qai_hub as hub
-from qai_hub_models.models.mobilesam import Model
-
-# Load the model
-torch_model = Model.from_pretrained()
-
-# Device
-device = hub.Device("Samsung Galaxy S25")
-
-# Trace model
-input_shape = torch_model.get_input_spec()
-sample_inputs = torch_model.sample_inputs()
-
-pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
-# Compile model on a specific device
-compile_job = hub.submit_compile_job(
-    model=pt_model,
-    device=device,
-    input_specs=torch_model.get_input_spec(),
-)
-
-# Get target model to run on-device
-target_model = compile_job.get_target_model()
-```
-
-Step 2: **Performance profiling on cloud-hosted device**
-
-After compiling models from step 1. Models can be profiled model on-device using the
-`target_model`. Note that this scripts runs the model on a device automatically
-provisioned in the cloud. Once the job is submitted, you can navigate to a
-provided job URL to view a variety of on-device performance metrics.
-```python
-profile_job = hub.submit_profile_job(
-    model=target_model,
-    device=device,
-)
-```
-
-Step 3: **Verify on-device accuracy**
-
-To verify the accuracy of the model on-device, you can run on-device inference
-on sample input data on the same cloud hosted device.
-```python
-input_data = torch_model.sample_inputs()
-inference_job = hub.submit_inference_job(
-    model=target_model,
-    device=device,
-    inputs=input_data,
-)
-on_device_output = inference_job.download_output_data()
-```
-With the output of the model, you can compute like PSNR, relative errors or
-spot check the output with expected output.
-
-**Note**: This on-device profiling and inference requires access to Qualcomm®
-AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).
-
-## Run demo on a cloud-hosted device
-
-You can also run the demo on-device.
-
-```bash
-python -m qai_hub_models.models.mobilesam.demo --eval-mode on-device
-```
-
-**NOTE**: If you want running in a Jupyter Notebook or Google Colab like
-environment, please add the following to your cell (instead of the above).
-```
-%run -m qai_hub_models.models.mobilesam.demo -- --eval-mode on-device
-```
-
-## Deploying compiled model to Android
-
-The models can be deployed using multiple runtimes:
-- TensorFlow Lite (`.tflite` export): [This
-tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
-guide to deploy the .tflite model in an Android application.
-
-- QNN (`.so` export): This [sample
-app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
-provides instructions on how to use the `.so` shared library in an Android application.
-
-## View on Qualcomm® AI Hub
-Get more details on MobileSam's performance across various devices [here](https://aihub.qualcomm.com/models/mobilesam).
-Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
+# MobileSam: Optimized for Qualcomm Devices
+
+This is based on the implementation of MobileSam found [here](https://github.com/facebookresearch/segment-anything).
+This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/mobilesam) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
+
+Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
+
+## Getting Started
+There are two ways to deploy this model on your device:
+
+### Option 1: Download Pre-Exported Models
+
+Below are pre-exported model assets ready for deployment.
+
+| Runtime | Precision | Chipset | SDK Versions | Download |
+|---|---|---|---|---|
+| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilesam/releases/v0.46.1/mobilesam-onnx-float.zip) |
+| QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilesam/releases/v0.46.1/mobilesam-qnn_dlc-float.zip) |
+| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilesam/releases/v0.46.1/mobilesam-tflite-float.zip) |
+
+For more device-specific assets and performance metrics, visit **[MobileSam on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/mobilesam)**.
+
+### Option 2: Export with Custom Configurations
+
+Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/mobilesam) Python library to compile and export the model with your own:
+- Custom weights (e.g., fine-tuned checkpoints)
+- Custom input shapes
+- Target device and runtime configurations
+
+This option is ideal if you need to customize the model beyond the default configuration provided here.
+
+See the [MobileSam repository on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/mobilesam) for usage instructions.
+
+## Model Details
+
+**Model Type:** Semantic segmentation
+
+**Model Stats:**
+- Model checkpoint: vit_t
+- Input resolution: 720p (720x1280)
+- Number of parameters (SAMEncoder): 6.95M
+- Model size (SAMEncoder) (float): 26.6 MB
+- Number of parameters (SAMDecoder): 6.16M
+- Model size (SAMDecoder) (float): 23.7 MB
+
+## Performance Summary
+| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
+|---|---|---|---|---|---|---|
+| MobileSAMDecoder | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 5.734 ms | 5 - 218 MB | NPU |
+| MobileSAMDecoder | ONNX | float | Qualcomm® QCS8550 (Proxy) | 8.389 ms | 4 - 230 MB | NPU |
+| MobileSAMDecoder | ONNX | float | Qualcomm® QCS9075 | 10.04 ms | 4 - 7 MB | NPU |
+| MobileSAMDecoder | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.263 ms | 2 - 180 MB | NPU |
+| MobileSAMDecoder | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.481 ms | 0 - 186 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Snapdragon® X Elite | 5.137 ms | 4 - 4 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 3.319 ms | 4 - 230 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 11.595 ms | 2 - 199 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 4.692 ms | 4 - 6 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Qualcomm® SA8775P | 5.737 ms | 1 - 200 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 8.499 ms | 0 - 222 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Qualcomm® SA7255P | 11.595 ms | 2 - 199 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Qualcomm® SA8295P | 7.157 ms | 0 - 193 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.538 ms | 0 - 198 MB | NPU |
+| MobileSAMDecoder | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.182 ms | 4 - 203 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 3.854 ms | 0 - 237 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 12.771 ms | 0 - 208 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 5.561 ms | 0 - 18 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Qualcomm® SA8775P | 6.547 ms | 0 - 209 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 9.544 ms | 0 - 225 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Qualcomm® SA7255P | 12.771 ms | 0 - 208 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Qualcomm® SA8295P | 8.152 ms | 0 - 205 MB | NPU |
+| MobileSAMDecoder | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.935 ms | 0 - 209 MB | NPU |
+| MobileSAMEncoder | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 254.846 ms | 128 - 866 MB | NPU |
+| MobileSAMEncoder | ONNX | float | Qualcomm® QCS8550 (Proxy) | 341.905 ms | 114 - 116 MB | NPU |
+| MobileSAMEncoder | ONNX | float | Qualcomm® QCS9075 | 414.464 ms | 68 - 71 MB | NPU |
+| MobileSAMEncoder | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 185.472 ms | 117 - 762 MB | NPU |
+| MobileSAMEncoder | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 152.087 ms | 130 - 692 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Snapdragon® X Elite | 99.194 ms | 12 - 12 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 65.564 ms | 11 - 1629 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 209.778 ms | 0 - 1209 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 96.271 ms | 12 - 16 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Qualcomm® SA8775P | 102.588 ms | 0 - 1614 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 467.917 ms | 12 - 1732 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Qualcomm® SA7255P | 209.778 ms | 0 - 1209 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Qualcomm® SA8295P | 474.029 ms | 0 - 1248 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 49.974 ms | 12 - 1214 MB | NPU |
+| MobileSAMEncoder | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 44.853 ms | 11 - 1322 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 127.122 ms | 4 - 1521 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 292.444 ms | 4 - 1275 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 176.108 ms | 0 - 3 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Qualcomm® SA8775P | 180.445 ms | 4 - 1275 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 588.02 ms | 0 - 3179 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Qualcomm® SA7255P | 292.444 ms | 4 - 1275 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Qualcomm® SA8295P | 504.031 ms | 4 - 1899 MB | NPU |
+| MobileSAMEncoder | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 96.546 ms | 3 - 1271 MB | NPU |
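The three download links in the table above share one visible URL pattern (bucket, model name, release tag, runtime, precision). A small hypothetical helper to build such links; the pattern is inferred from these three URLs only, so verify any generated URL against the table before using it:

```python
# Hypothetical helper: reconstructs pre-exported asset URLs following the
# pattern of the three links shown in the download table (mobilesam,
# release v0.46.1). The pattern is inferred, not documented.
BASE = "https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models"

def asset_url(model: str, release: str, runtime: str, precision: str) -> str:
    """Build the download URL for a pre-exported model asset."""
    return f"{BASE}/{model}/releases/{release}/{model}-{runtime}-{precision}.zip"

print(asset_url("mobilesam", "v0.46.1", "onnx", "float"))
```

Runtime names are lowercased in the file name (`onnx`, `qnn_dlc`, `tflite`), unlike the capitalized Runtime column in the table.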

## License
* The license for the original implementation of MobileSam can be found
[here](https://github.com/facebookresearch/segment-anything/blob/main/LICENSE).

## References
* [Segment Anything](https://arxiv.org/abs/2306.14289)
* [Source Model Implementation](https://github.com/facebookresearch/segment-anything)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).

tool-versions.yaml
DELETED
@@ -1,4 +0,0 @@
-tool_versions:
-  onnx:
-    qairt: 2.37.1.250807093845_124904
-    onnx_runtime: 1.23.0