v0.49.1

See https://github.com/qualcomm/ai-hub-models/releases/v0.49.1 for the changelog.

ConvNext-Base is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This is based on the implementation of ConvNext-Base found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py).

This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/tree/v0.49.1/qai_hub_models/models/convnext_base) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

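For reference, an ImageNet classifier like this one typically expects a 224×224 RGB input normalized with the standard ImageNet channel statistics. Below is a minimal NumPy preprocessing sketch; the exact resize/crop sizes, layout, and statistics are assumptions inherited from the torchvision implementation linked above, so check the model definition in the repository for the authoritative pipeline:

```python
import numpy as np

# Standard ImageNet channel statistics (an assumption for this export;
# verify against the model definition in the AI Hub Models repository).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 RGB image to a normalized NCHW float32 batch."""
    x = image_hwc_uint8.astype(np.float32) / 255.0
    x = (x - MEAN) / STD                      # broadcast over the channel axis
    return x.transpose(2, 0, 1)[np.newaxis]  # HWC -> (1, 3, H, W)

# Example: a dummy 224x224 RGB frame.
batch = preprocess(np.zeros((224, 224, 3), dtype=np.uint8))
print(batch.shape)  # (1, 3, 224, 224)
```
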
Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.49.1/convnext_base-onnx-float.zip) |
| ONNX | w8a16 | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.49.1/convnext_base-onnx-w8a16.zip) |
| QNN_DLC | float | Universal | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.49.1/convnext_base-qnn_dlc-float.zip) |
| QNN_DLC | w8a16 | Universal | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.49.1/convnext_base-qnn_dlc-w8a16.zip) |
| TFLITE | float | Universal | QAIRT 2.43, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/convnext_base/releases/v0.49.1/convnext_base-tflite-float.zip) |

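The download links above follow a single naming pattern, so a fetch script can build them programmatically. A small sketch; the pattern is inferred from the table rows, not from a documented API:

```python
# Base path taken from the Download links in the table above.
BASE_URL = ("https://qaihub-public-assets.s3.us-west-2.amazonaws.com/"
            "qai-hub-models/models/convnext_base/releases/v0.49.1")

def asset_url(runtime: str, precision: str) -> str:
    """Build an asset URL: convnext_base-<runtime>-<precision>.zip,
    with the runtime lowercased (e.g. QNN_DLC -> qnn_dlc)."""
    return f"{BASE_URL}/convnext_base-{runtime.lower()}-{precision}.zip"

print(asset_url("ONNX", "w8a16"))
```
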
For more device-specific assets and performance metrics, visit **[ConvNext-Base on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/convnext_base)**.

### Option 2: Export with Custom Configurations

Use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/tree/v0.49.1/qai_hub_models/models/convnext_base) Python library to compile and export the model with your own:

- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See the [ConvNext-Base repository on GitHub](https://github.com/qualcomm/ai-hub-models/tree/v0.49.1/qai_hub_models/models/convnext_base) for usage instructions.

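As a sketch of that workflow: AI Hub Models lays out one module per model, so the export entry point for this model would follow the library's per-model pattern. The module path and flags below are assumptions based on that layout, not verified commands; check the linked repository before running anything:

```python
# Build the (assumed) export command for this model. The module path follows
# the library's qai_hub_models.models.<model_id>.export convention; the
# --target-runtime flag is illustrative and should be verified upstream.
model_id = "convnext_base"
cmd = [
    "python", "-m", f"qai_hub_models.models.{model_id}.export",
    "--target-runtime", "onnx",
]
print(" ".join(cmd))
```

Running the printed command requires the `qai-hub-models` package and a Qualcomm AI Hub account, since compilation and profiling happen on hosted devices.
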
## Model Details

## Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| ConvNext-Base | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.161 | 1 - 285 | NPU |
| ConvNext-Base | ONNX | float | Snapdragon® X2 Elite | 3.536 | 176 - 176 | NPU |
| ConvNext-Base | ONNX | float | Snapdragon® X Elite | 7.488 | 175 - 175 | NPU |
| ConvNext-Base | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 5.314 | 0 - 350 | NPU |
| ConvNext-Base | ONNX | float | Qualcomm® QCS8550 (Proxy) | 7.155 | 0 - 195 | NPU |
| ConvNext-Base | ONNX | float | Qualcomm® QCS9075 | 11.075 | 0 - 4 | NPU |
| ConvNext-Base | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.137 | 0 - 283 | NPU |
| ConvNext-Base | ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 2.589 | 0 - 224 | NPU |
| ConvNext-Base | ONNX | w8a16 | Snapdragon® X2 Elite | 2.77 | 90 - 90 | NPU |
| ConvNext-Base | ONNX | w8a16 | Snapdragon® X Elite | 6.449 | 90 - 90 | NPU |
| ConvNext-Base | ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | 4.373 | 0 - 269 | NPU |
| ConvNext-Base | ONNX | w8a16 | Qualcomm® QCS6490 | 1091.433 | 32 - 63 | CPU |
| ConvNext-Base | ONNX | w8a16 | Qualcomm® QCS8550 (Proxy) | 6.193 | 0 - 103 | NPU |
| ConvNext-Base | ONNX | w8a16 | Qualcomm® QCS9075 | 5.896 | 0 - 3 | NPU |
| ConvNext-Base | ONNX | w8a16 | Qualcomm® QCM6690 | 634.065 | 43 - 57 | CPU |
| ConvNext-Base | ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 3.212 | 0 - 210 | NPU |
| ConvNext-Base | ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 607.014 | 68 - 83 | CPU |
| ConvNext-Base | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.542 | 1 - 283 | NPU |
| ConvNext-Base | QNN_DLC | float | Snapdragon® X2 Elite | 4.418 | 1 - 1 | NPU |
| ConvNext-Base | QNN_DLC | float | Snapdragon® X Elite | 8.601 | 1 - 1 | NPU |
| ConvNext-Base | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 6.034 | 0 - 349 | NPU |
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 42.216 | 1 - 280 | NPU |
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 8.222 | 1 - 3 | NPU |
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS9075 | 12.066 | 1 - 3 | NPU |
| ConvNext-Base | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 20.621 | 0 - 336 | NPU |
| ConvNext-Base | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.644 | 1 - 281 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 2.522 | 0 - 200 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® X2 Elite | 3.025 | 0 - 0 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® X Elite | 6.289 | 0 - 0 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 8 Gen 3 Mobile | 4.091 | 0 - 248 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS6490 | 23.862 | 2 - 4 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS8275 (Proxy) | 14.601 | 0 - 198 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS8550 (Proxy) | 5.924 | 0 - 3 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS9075 | 6.132 | 0 - 2 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCM6690 | 61.117 | 0 - 394 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Qualcomm® QCS8450 (Proxy) | 8.985 | 0 - 245 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 3.277 | 0 - 189 | NPU |
| ConvNext-Base | QNN_DLC | w8a16 | Snapdragon® 7 Gen 4 Mobile | 7.775 | 0 - 248 | NPU |
| ConvNext-Base | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.156 | 0 - 278 | NPU |
| ConvNext-Base | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 5.442 | 0 - 346 | NPU |
| ConvNext-Base | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 40.92 | 0 - 273 | NPU |
| ConvNext-Base | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 7.243 | 0 - 2 | NPU |
| ConvNext-Base | TFLITE | float | Qualcomm® QCS9075 | 11.514 | 0 - 177 | NPU |
| ConvNext-Base | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 19.705 | 0 - 331 | NPU |
| ConvNext-Base | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.13 | 0 - 277 | NPU |

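Reading the summary: on a given chipset, the w8a16 quantized exports are generally the fastest, and milliseconds per inference converts directly to throughput. A quick sketch over a few rows copied from the table above (Snapdragon® 8 Elite Gen 5 Mobile only):

```python
# (runtime, precision, inference time in ms), copied from the
# Performance Summary rows for Snapdragon® 8 Elite Gen 5 Mobile.
rows = [
    ("ONNX", "float", 3.161),
    ("ONNX", "w8a16", 2.589),
    ("QNN_DLC", "float", 3.542),
    ("QNN_DLC", "w8a16", 2.522),
    ("TFLITE", "float", 3.156),
]

# Pick the fastest configuration and estimate single-stream throughput.
runtime, precision, ms = min(rows, key=lambda r: r[2])
throughput = 1000.0 / ms  # inferences per second
print(runtime, precision, round(throughput, 1))
```
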
## License

* The license for the original implementation of ConvNext-Base can be found