|
--- |
|
license: apache-2.0 |
|
--- |
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c1fef5b9d81735a12c3fcc/OclZQ_XWvTiVm0kOTwXjy.png"> |
|
|
|
# VGG19: Image Classification |
|
|
|
VGGNet is a deep convolutional neural network developed by researchers from the University of Oxford's Visual Geometry Group and Google DeepMind. It explores the relationship between the depth of a convolutional neural network and its performance. By repeatedly stacking 3×3 convolution kernels and 2×2 max pooling layers, VGGNet successfully constructs deep convolutional neural networks of 16 to 19 layers. Compared with previous state-of-the-art network structures, VGGNet greatly reduces the error rate. Throughout the VGGNet paper, only 3×3 convolution kernels and 2×2 max pooling kernels are used, continuously deepening the network structure to improve performance.
|
|
|
VGG19 contains 19 weight layers (16 convolutional layers and 3 fully connected layers).
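The layer configuration above can be sanity-checked by counting parameters. The sketch below, a plain-Python illustration (not part of the conversion pipeline), tallies VGG19's standard architecture: five conv blocks of 3×3 kernels with 64/128/256/512/512 filters, then three fully connected layers.

```python
# VGG19 layer configuration: (number of 3x3 convs, output filters) per block.
# A 2x2 max pool follows each block; pools and ReLUs carry no parameters.
CONV_BLOCKS = [(2, 64), (2, 128), (4, 256), (4, 512), (4, 512)]

def vgg19_param_count(num_classes: int = 1000) -> int:
    total, in_ch = 0, 3  # RGB input has 3 channels
    for convs, filters in CONV_BLOCKS:
        for _ in range(convs):
            total += 3 * 3 * in_ch * filters + filters  # 3x3 kernel weights + bias
            in_ch = filters
    # After five 2x2 pools, a 224x224 input becomes 7x7x512 = 25088 features
    fc_sizes = [25088, 4096, 4096, num_classes]
    for n_in, n_out in zip(fc_sizes, fc_sizes[1:]):
        total += n_in * n_out + n_out  # FC weights + bias
    return total

print(vgg19_param_count())  # 143667240 (~143.7M parameters)
```

Note that the three fully connected layers account for roughly 124M of the ~143.7M parameters, which is why VGG19 checkpoints are so large relative to the network's depth.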
|
|
|
The model can be found [here](https://keras.io/api/applications/vgg/) |
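The Keras VGG models linked above expect "caffe"-style input preprocessing (RGB converted to BGR, then zero-centered with the ImageNet per-channel means). A minimal numpy sketch of that transform, useful when feeding the converted model outside of Keras:

```python
import numpy as np

# ImageNet per-channel means in BGR order, as used by Keras' VGG preprocess_input
VGG_BGR_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess_vgg(image_rgb: np.ndarray) -> np.ndarray:
    """image_rgb: (224, 224, 3) float32 array in RGB order, range [0, 255]."""
    bgr = image_rgb[..., ::-1].astype(np.float32)  # flip RGB -> BGR
    return bgr - VGG_BGR_MEANS                     # zero-center each channel

# Example: a dummy mid-gray image
dummy = np.full((224, 224, 3), 128.0, dtype=np.float32)
x = preprocess_vgg(dummy)
print(x.shape)  # (224, 224, 3)
```

Resizing the input to 224×224 beforehand (e.g. with OpenCV or PIL) is assumed here; only the channel-order and mean-subtraction steps are shown.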
|
|
|
## CONTENTS |
|
- [Performance](#performance) |
|
- [Model Conversion](#model-conversion) |
|
- [Inference](#inference) |
|
|
|
## Performance |
|
|
|
|Device|SoC|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download| |
|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| |
|
|AidBox QCS6490|QCS6490|QNN|VGG19|224|20.3|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int8.qnn.serialized.bin)| |
|
|AidBox QCS6490|QCS6490|QNN|VGG19|224|-|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int16.qnn.serialized.bin)| |
|
|AidBox QCS6490|QCS6490|SNPE|VGG19|224|18.4|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int8_htp_snpe2.dlc)| |
|
|AidBox QCS6490|QCS6490|SNPE|VGG19|224|-|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int16_htp_snpe2.dlc)| |
|
|APLUX QCS8550|QCS8550|QNN|VGG19|224|6.6|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int8.qnn.serialized.bin)| |
|
|APLUX QCS8550|QCS8550|QNN|VGG19|224|11.1|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int16.qnn.serialized.bin)| |
|
|APLUX QCS8550|QCS8550|SNPE|VGG19|224|4.2|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int8_htp_snpe2.dlc)| |
|
|APLUX QCS8550|QCS8550|SNPE|VGG19|224|5.6|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int16_htp_snpe2.dlc)| |
|
|AidBox GS865|QCS8250|SNPE|VGG19|224|-|INT8|NPU|[model download]()| |
|
|
|
## Model Conversion |
|
|
|
Demo models are converted with [**AIMO (AI Model Optimizer)**](https://aidlux.com/en/product/aimo).
|
|
|
The source model **vgg19.onnx** can be found [here](https://huggingface.co/aplux/VGG19/blob/main/vgg19.onnx).
|
|
|
The demo model conversion steps on AIMO can be found below:
|
|
|
|Device|SoC|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps| |
|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| |
|
|AidBox QCS6490|QCS6490|QNN|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_qnn_int8.png)|

|AidBox QCS6490|QCS6490|QNN|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_qnn_int16.png)|

|AidBox QCS6490|QCS6490|SNPE|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_snpe_int8.png)|

|AidBox QCS6490|QCS6490|SNPE|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_snpe_int16.png)|

|APLUX QCS8550|QCS8550|QNN|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_qnn_int8.png)|

|APLUX QCS8550|QCS8550|QNN|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_qnn_int16.png)|

|APLUX QCS8550|QCS8550|SNPE|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_snpe_int8.png)|

|APLUX QCS8550|QCS8550|SNPE|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_snpe_int16.png)|

|AidBox GS865|QCS8250|SNPE|VGG19|224|INT8|NPU|[View Steps]()|
|
|
|
## Inference |
|
|
|
### Step 1: Convert model
|
|
|
a. Prepare the source model in ONNX format. The source model can be found [here](https://huggingface.co/aplux/VGG19/blob/main/vgg19.onnx).
|
|
|
b. Log in to [AIMO](https://aidlux.com/en/product/aimo) and convert the source model to the target format. The conversion can follow the **AIMO Conversion Steps** in the [Model Conversion](#model-conversion) table.
|
|
|
c. After the conversion task completes, download the target model file.
|
|
|
### Step 2: Install AidLite SDK
|
|
|
The installation guide of AidLite SDK can be found [here](https://huggingface.co/datasets/aplux/AIToolKit/blob/main/AidLite%20SDK%20Development%20Documents.md#installation). |
|
|
|
### Step 3: Run demo program
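However the demo is invoked, the model's raw output is a 1000-class score vector (ImageNet class ordering is assumed here). A minimal numpy sketch of the usual post-processing, softmax followed by top-5 selection:

```python
import numpy as np

def top5(logits: np.ndarray):
    """Return (indices, probabilities) of the 5 highest-scoring classes."""
    z = logits - logits.max()             # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()   # softmax over all classes
    idx = np.argsort(probs)[::-1][:5]     # indices of the 5 largest probabilities
    return idx, probs[idx]

# Example with a dummy 1000-class logit vector standing in for model output
rng = np.random.default_rng(0)
logits = rng.normal(size=1000).astype(np.float32)
indices, probs = top5(logits)
for i, p in zip(indices, probs):
    print(f"class {i}: {p:.4f}")
```

The class indices map to labels via the standard ImageNet class-index file shipped with Keras; that mapping is not reproduced here.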