qaihm-bot committed · Commit f0b3ea8 · verified · 1 Parent(s): 7cf2c9c

See https://github.com/quic/ai-hub-models/releases/v0.46.1 for changelog.

FastSam-X_float.dlc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:751fc647f12b7281ebaf0195eb2e18f1ed80dc975a96cc0b7aa078dcb0416d12
- size 289287916
 
 
 
 
FastSam-X_float.onnx.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d0824d92a192130252181b041eb6e5a0657acf132c4f4e8576c4df792c8374a0
- size 231425634
 
 
 
 
FastSam-X_float.tflite DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5e736b0e3d647e63a03fd572039dbdff48941971c5aa0462568d2192c98f5bc8
- size 289009444
 
 
 
 
README.md CHANGED
@@ -9,246 +9,92 @@ pipeline_tag: image-segmentation
 
 ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_x/web-assets/model_demo.png)
 
- # FastSam-X: Optimized for Mobile Deployment
- ## Generate high quality segmentation mask on device
-
 
 The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task. This task is designed to segment any object within an image based on various possible user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks.
 
- This model is an implementation of FastSam-X found [here](https://github.com/CASIA-IVA-Lab/FastSAM).
-
-
- This repository provides scripts to run FastSam-X on Qualcomm® devices.
- More details on model performance across various devices, can be found
- [here](https://aihub.qualcomm.com/models/fastsam_x).
-
-
-
- ### Model Details
-
- - **Model Type:** Model_use_case.semantic_segmentation
- - **Model Stats:**
- - Model checkpoint: fastsam-x.pt
- - Inference latency: RealTime
- - Input resolution: 640x640
- - Number of parameters: 72.2M
- - Model size (float): 276 MB
-
- | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
- |---|---|---|---|---|---|---|---|---|
- | FastSam-X | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 278.499 ms | 5 - 239 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 277.324 ms | 2 - 205 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 101.91 ms | 4 - 485 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 90.342 ms | 5 - 361 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 44.497 ms | 0 - 3 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 42.743 ms | 5 - 8 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 45.136 ms | 0 - 162 MB | NPU | [FastSam-X.onnx.zip](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx.zip) |
- | FastSam-X | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 69.191 ms | 4 - 239 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 317.706 ms | 1 - 200 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 278.499 ms | 5 - 239 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 277.324 ms | 2 - 205 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 77.574 ms | 2 - 302 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 77.532 ms | 0 - 263 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 69.191 ms | 4 - 239 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 317.706 ms | 1 - 200 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 33.767 ms | 3 - 418 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 32.488 ms | 5 - 293 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 35.91 ms | 15 - 283 MB | NPU | [FastSam-X.onnx.zip](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx.zip) |
- | FastSam-X | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | TFLITE | 25.497 ms | 4 - 235 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 25.275 ms | 5 - 200 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 27.205 ms | 11 - 188 MB | NPU | [FastSam-X.onnx.zip](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx.zip) |
- | FastSam-X | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | TFLITE | 17.698 ms | 4 - 242 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) |
- | FastSam-X | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | QNN_DLC | 17.445 ms | 5 - 221 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | ONNX | 19.693 ms | 16 - 200 MB | NPU | [FastSam-X.onnx.zip](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx.zip) |
- | FastSam-X | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 43.37 ms | 5 - 5 MB | NPU | [FastSam-X.dlc](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.dlc) |
- | FastSam-X | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 47.029 ms | 139 - 139 MB | NPU | [FastSam-X.onnx.zip](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx.zip) |
-
-
-
-
- ## Installation
-
-
- Install the package via pip:
- ```bash
- # NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
- pip install "qai-hub-models[fastsam-x]"
- ```
-
-
- ## Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device
-
- Sign-in to [Qualcomm® AI Hub Workbench](https://workbench.aihub.qualcomm.com/) with your
- Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
-
- With this API token, you can configure your client to run models on the cloud
- hosted devices.
- ```bash
- qai-hub configure --api_token API_TOKEN
- ```
- Navigate to [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.
-
-
-
- ## Demo off target
-
- The package contains a simple end-to-end demo that downloads pre-trained
- weights and runs this model on a sample input.
-
- ```bash
- python -m qai_hub_models.models.fastsam_x.demo
- ```
-
- The above demo runs a reference implementation of pre-processing, model
- inference, and post processing.
-
- **NOTE**: If you want running in a Jupyter Notebook or Google Colab like
- environment, please add the following to your cell (instead of the above).
- ```
- %run -m qai_hub_models.models.fastsam_x.demo
- ```
-
-
- ### Run model on a cloud-hosted device
-
- In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
- device. This script does the following:
- * Performance check on-device on a cloud-hosted device
- * Downloads compiled assets that can be deployed on-device for Android.
- * Accuracy check between PyTorch and on-device outputs.
-
- ```bash
- python -m qai_hub_models.models.fastsam_x.export
- ```
-
-
-
- ## How does this work?
-
- This [export script](https://aihub.qualcomm.com/models/fastsam_x/qai_hub_models/models/FastSam-X/export.py)
- leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
- on-device. Lets go through each step below in detail:
-
- Step 1: **Compile model for on-device deployment**
-
- To compile a PyTorch model for on-device deployment, we first trace the model
- in memory using the `jit.trace` and then call the `submit_compile_job` API.
-
- ```python
- import torch
-
- import qai_hub as hub
- from qai_hub_models.models.fastsam_x import Model
-
- # Load the model
- torch_model = Model.from_pretrained()
-
- # Device
- device = hub.Device("Samsung Galaxy S25")
-
- # Trace model
- input_shape = torch_model.get_input_spec()
- sample_inputs = torch_model.sample_inputs()
-
- pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
- # Compile model on a specific device
- compile_job = hub.submit_compile_job(
- model=pt_model,
- device=device,
- input_specs=torch_model.get_input_spec(),
- )
-
- # Get target model to run on-device
- target_model = compile_job.get_target_model()
-
- ```
-
-
- Step 2: **Performance profiling on cloud-hosted device**
-
- After compiling models from step 1. Models can be profiled model on-device using the
- `target_model`. Note that this scripts runs the model on a device automatically
- provisioned in the cloud. Once the job is submitted, you can navigate to a
- provided job URL to view a variety of on-device performance metrics.
- ```python
- profile_job = hub.submit_profile_job(
- model=target_model,
- device=device,
- )
-
- ```
-
- Step 3: **Verify on-device accuracy**
-
- To verify the accuracy of the model on-device, you can run on-device inference
- on sample input data on the same cloud hosted device.
- ```python
- input_data = torch_model.sample_inputs()
- inference_job = hub.submit_inference_job(
- model=target_model,
- device=device,
- inputs=input_data,
- )
- on_device_output = inference_job.download_output_data()
-
- ```
- With the output of the model, you can compute like PSNR, relative errors or
- spot check the output with expected output.
-
- **Note**: This on-device profiling and inference requires access to Qualcomm®
- AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).
-
-
-
- ## Run demo on a cloud-hosted device
-
- You can also run the demo on-device.
-
- ```bash
- python -m qai_hub_models.models.fastsam_x.demo --eval-mode on-device
- ```
-
- **NOTE**: If you want running in a Jupyter Notebook or Google Colab like
- environment, please add the following to your cell (instead of the above).
- ```
- %run -m qai_hub_models.models.fastsam_x.demo -- --eval-mode on-device
- ```
-
-
- ## Deploying compiled model to Android
-
-
- The models can be deployed using multiple runtimes:
- - TensorFlow Lite (`.tflite` export): [This
- tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
- guide to deploy the .tflite model in an Android application.
-
-
- - QNN (`.so` export ): This [sample
- app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
- provides instructions on how to use the `.so` shared library in an Android application.
-
-
- ## View on Qualcomm® AI Hub
- Get more details on FastSam-X's performance across various devices [here](https://aihub.qualcomm.com/models/fastsam_x).
- Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
-
 
 ## License
 * The license for the original implementation of FastSam-X can be found
 [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).
 
-
-
 ## References
 * [Fast Segment Anything](https://arxiv.org/abs/2306.12156)
 * [Source Model Implementation](https://github.com/CASIA-IVA-Lab/FastSAM)
 
-
-
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
-
-
 
 ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_x/web-assets/model_demo.png)
 
+ # FastSam-X: Optimized for Qualcomm Devices
 
 The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task. This task is designed to segment any object within an image based on various possible user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks.
 
+ This is based on the implementation of FastSam-X found [here](https://github.com/CASIA-IVA-Lab/FastSAM).
+ This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/fastsam_x) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
+
+ Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
+
+ ## Getting Started
+ There are two ways to deploy this model on your device:
+
+ ### Option 1: Download Pre-Exported Models
+
+ Below are pre-exported model assets ready for deployment.
+
+ | Runtime | Precision | Chipset | SDK Versions | Download |
+ |---|---|---|---|---|
+ | ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_x/releases/v0.46.1/fastsam_x-onnx-float.zip)
+ | QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_x/releases/v0.46.1/fastsam_x-qnn_dlc-float.zip)
+ | TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_x/releases/v0.46.1/fastsam_x-tflite-float.zip)
+
+ For more device-specific assets and performance metrics, visit **[FastSam-X on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/fastsam_x)**.
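The download links in the table above follow one naming scheme, `fastsam_x-<runtime>-<precision>.zip`, under a common release prefix. As an illustrative sketch (the `asset_url` helper is hypothetical, not part of any package), the URL for a given runtime can be built like this:

```python
# Hypothetical helper: builds a pre-exported asset URL from the naming
# scheme used in the download table (fastsam_x-<runtime>-<precision>.zip).
BASE = (
    "https://qaihub-public-assets.s3.us-west-2.amazonaws.com"
    "/qai-hub-models/models/fastsam_x/releases/v0.46.1"
)

def asset_url(runtime: str, precision: str = "float") -> str:
    """Return the download URL for a runtime ('onnx', 'qnn_dlc', 'tflite')."""
    return f"{BASE}/fastsam_x-{runtime}-{precision}.zip"

print(asset_url("tflite"))
# https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_x/releases/v0.46.1/fastsam_x-tflite-float.zip
```

The returned URL can then be fetched with any HTTP client and unzipped to obtain the model file.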
+
+ ### Option 2: Export with Custom Configurations
+
+ Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/fastsam_x) Python library to compile and export the model with your own:
+ - Custom weights (e.g., fine-tuned checkpoints)
+ - Custom input shapes
+ - Target device and runtime configurations
+
+ This option is ideal if you need to customize the model beyond the default configuration provided here.
+
+ See the [FastSam-X model directory on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/fastsam_x) for usage instructions.
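A previous revision of this card documented the CLI for this flow. As a sketch (package name and export module path are taken from that revision; current flags may differ, so check `--help` output):

```shell
# Install the library (the old card notes Python 3.10 - 3.13 support),
# then inspect the export entry point's options before running a job.
pip install "qai-hub-models[fastsam-x]"
python -m qai_hub_models.models.fastsam_x.export --help
```

Running the export itself requires a configured Qualcomm AI Hub Workbench API token (`qai-hub configure --api_token API_TOKEN`).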
+
+ ## Model Details
+
+ **Model Type:** Semantic segmentation
+
+ **Model Stats:**
+ - Model checkpoint: fastsam-x.pt
+ - Inference latency: Real time
+ - Input resolution: 640x640
+ - Number of parameters: 72.2M
+ - Model size (float): 276 MB
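The stated float model size lines up with the parameter count: 72.2M parameters stored as 4-byte float32 values come to roughly 276 MB. A quick arithmetic check:

```python
# Sanity check: 72.2M parameters x 4 bytes (float32), expressed in mebibytes.
params = 72.2e6
size_bytes = params * 4
size_mib = size_bytes / 2**20
print(f"{size_mib:.0f} MB")  # 275 MB, close to the stated 276 MB
```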
+
+ ## Performance Summary
+ | Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit
+ |---|---|---|---|---|---|---
+ | FastSam-X | ONNX | float | Snapdragon® X Elite | 46.486 ms | 139 - 139 MB | NPU
+ | FastSam-X | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 36.497 ms | 4 - 267 MB | NPU
+ | FastSam-X | ONNX | float | Qualcomm® QCS8550 (Proxy) | 46.077 ms | 11 - 14 MB | NPU
+ | FastSam-X | ONNX | float | Qualcomm® QCS9075 | 73.748 ms | 11 - 19 MB | NPU
+ | FastSam-X | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 27.37 ms | 12 - 187 MB | NPU
+ | FastSam-X | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 18.362 ms | 1 - 183 MB | NPU
+ | FastSam-X | QNN_DLC | float | Snapdragon® X Elite | 43.841 ms | 5 - 5 MB | NPU
+ | FastSam-X | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 32.661 ms | 3 - 314 MB | NPU
+ | FastSam-X | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 279.828 ms | 2 - 223 MB | NPU
+ | FastSam-X | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 43.11 ms | 5 - 7 MB | NPU
+ | FastSam-X | QNN_DLC | float | Qualcomm® SA8775P | 68.478 ms | 0 - 222 MB | NPU
+ | FastSam-X | QNN_DLC | float | Qualcomm® QCS9075 | 70.434 ms | 7 - 17 MB | NPU
+ | FastSam-X | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 93.211 ms | 2 - 392 MB | NPU
+ | FastSam-X | QNN_DLC | float | Qualcomm® SA7255P | 279.828 ms | 2 - 223 MB | NPU
+ | FastSam-X | QNN_DLC | float | Qualcomm® SA8295P | 77.966 ms | 0 - 296 MB | NPU
+ | FastSam-X | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.29 ms | 0 - 222 MB | NPU
+ | FastSam-X | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 17.889 ms | 5 - 242 MB | NPU
+ | FastSam-X | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 32.534 ms | 3 - 443 MB | NPU
+ | FastSam-X | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 279.179 ms | 4 - 269 MB | NPU
+ | FastSam-X | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 42.096 ms | 4 - 43 MB | NPU
+ | FastSam-X | TFLITE | float | Qualcomm® SA8775P | 68.042 ms | 4 - 269 MB | NPU
+ | FastSam-X | TFLITE | float | Qualcomm® QCS9075 | 70.216 ms | 4 - 158 MB | NPU
+ | FastSam-X | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 92.525 ms | 5 - 525 MB | NPU
+ | FastSam-X | TFLITE | float | Qualcomm® SA7255P | 279.179 ms | 4 - 269 MB | NPU
+ | FastSam-X | TFLITE | float | Qualcomm® SA8295P | 77.396 ms | 4 - 343 MB | NPU
+ | FastSam-X | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.087 ms | 4 - 271 MB | NPU
+ | FastSam-X | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 17.21 ms | 4 - 276 MB | NPU
 
 ## License
 * The license for the original implementation of FastSam-X can be found
 [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).
 
 ## References
 * [Fast Segment Anything](https://arxiv.org/abs/2306.12156)
 * [Source Model Implementation](https://github.com/CASIA-IVA-Lab/FastSAM)
 
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
 
 
tool-versions.yaml DELETED
@@ -1,4 +0,0 @@
- tool_versions:
-   onnx:
-     qairt: 2.37.1.250807093845_124904
-     onnx_runtime: 1.23.0