BGNet: Optimized for Mobile Deployment
Segment images in real-time on device
BGNet, or Boundary-Guided Network, is a model designed for camouflaged object detection. It leverages edge semantics to enhance representation learning, making it more effective at identifying objects that blend into their surroundings.
This model is an implementation of BGNet found here.
More details on model performance across various devices can be found here.
Model Details
- Model Type: Semantic segmentation
- Model Stats:
- Model checkpoint: BGNet
- Input resolution: 416x416
- Number of parameters: 77.8M
- Model size: 297 MB
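The model expects a fixed 416x416 input. The sketch below shows a typical preprocessing path from an RGB image to an NCHW float tensor; the normalization constants are an assumption (standard ImageNet mean/std), so check the model's export pipeline for the values it actually uses.

```python
import numpy as np

# Assumed channel-wise normalization constants (ImageNet defaults);
# verify against the actual BGNet export pipeline.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image: np.ndarray) -> np.ndarray:
    """Convert a 416x416x3 uint8 RGB image into a 1x3x416x416 float32 tensor."""
    assert image.shape == (416, 416, 3), "resize to 416x416 before calling"
    x = image.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - MEAN) / STD                   # channel-wise normalization
    x = x.transpose(2, 0, 1)               # HWC -> CHW
    return x[np.newaxis, ...]              # add batch dim -> NCHW

dummy = np.zeros((416, 416, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # (1, 3, 416, 416)
```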
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| BGNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 22.861 | 1 - 19 | FP16 | NPU | -- |
| BGNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 24.29 | 2 - 18 | FP16 | NPU | -- |
| BGNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 20.576 | 0 - 334 | FP16 | NPU | -- |
| BGNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 16.997 | 0 - 233 | FP16 | NPU | -- |
| BGNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 17.811 | 218 - 293 | FP16 | NPU | -- |
| BGNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 14.894 | 2 - 79 | FP16 | NPU | -- |
| BGNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 15.453 | 1 - 127 | FP16 | NPU | -- |
| BGNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 16.676 | 2 - 69 | FP16 | NPU | -- |
| BGNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 15.82 | 3 - 66 | FP16 | NPU | -- |
| BGNet | SA7255P ADP | SA7255P | TFLITE | 855.371 | 1 - 126 | FP16 | NPU | -- |
| BGNet | SA7255P ADP | SA7255P | QNN | 871.97 | 2 - 12 | FP16 | NPU | -- |
| BGNet | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 22.922 | 1 - 20 | FP16 | NPU | -- |
| BGNet | SA8255 (Proxy) | SA8255P Proxy | QNN | 20.043 | 2 - 4 | FP16 | NPU | -- |
| BGNet | SA8295P ADP | SA8295P | TFLITE | 37.962 | 1 - 99 | FP16 | NPU | -- |
| BGNet | SA8295P ADP | SA8295P | QNN | 34.723 | 2 - 19 | FP16 | NPU | -- |
| BGNet | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 22.977 | 0 - 19 | FP16 | NPU | -- |
| BGNet | SA8650 (Proxy) | SA8650P Proxy | QNN | 19.884 | 0 - 2 | FP16 | NPU | -- |
| BGNet | SA8775P ADP | SA8775P | TFLITE | 42.234 | 1 - 126 | FP16 | NPU | -- |
| BGNet | SA8775P ADP | SA8775P | QNN | 39.843 | 2 - 12 | FP16 | NPU | -- |
| BGNet | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 855.371 | 1 - 126 | FP16 | NPU | -- |
| BGNet | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 871.97 | 2 - 12 | FP16 | NPU | -- |
| BGNet | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 22.842 | 0 - 19 | FP16 | NPU | -- |
| BGNet | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 19.844 | 2 - 4 | FP16 | NPU | -- |
| BGNet | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 42.234 | 1 - 126 | FP16 | NPU | -- |
| BGNet | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 39.843 | 2 - 12 | FP16 | NPU | -- |
| BGNet | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 33.789 | 1 - 213 | FP16 | NPU | -- |
| BGNet | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 36.41 | 0 - 49 | FP16 | NPU | -- |
| BGNet | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 20.351 | 2 - 2 | FP16 | NPU | -- |
| BGNet | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 22.249 | 154 - 154 | FP16 | NPU | -- |
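The reported checkpoint size and parameter count are consistent with FP32 storage. A quick back-of-the-envelope check (assuming 4 bytes per FP32 weight, 2 bytes per FP16 weight, and MB meaning MiB here):

```python
params = 77.8e6          # reported parameter count
fp32_bytes = params * 4  # 4 bytes per FP32 weight (checkpoint)
fp16_bytes = params * 2  # 2 bytes per FP16 weight (on-device precision)

mib = 1024 ** 2
print(round(fp32_bytes / mib))  # 297 -> matches the reported 297 MB model size
print(round(fp16_bytes / mib))  # 148 -> rough FP16 weight footprint on device
```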
License
- The original implementation of BGNet does not provide a LICENSE file.
- The license for the compiled assets for on-device deployment can be found here.
References
Community
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
Usage and Limitations
This model may not be used for, or in connection with, any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation.
Inference Providers
This model isn't deployed by any Inference Provider.
HF Inference deployability: The HF Inference API does not support image-segmentation models for the pytorch library.