---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c1fef5b9d81735a12c3fcc/OclZQ_XWvTiVm0kOTwXjy.png">
# VGG19: Image Classification
VGGNet is a deep convolutional neural network developed by researchers at Oxford University's Visual Geometry Group and Google DeepMind. It explores the relationship between a convolutional network's depth and its performance: by repeatedly stacking small 3×3 convolution kernels and 2×2 max-pooling layers, the authors built networks 16 to 19 weight layers deep. Compared with previous state-of-the-art architectures, VGGNet greatly reduces the error rate, showing that steadily deepening a network built only from 3×3 convolutions and 2×2 max pooling improves performance.
VGG19 contains 19 weight layers (16 convolutional layers and 3 fully connected layers).
The model can be found [here](https://keras.io/api/applications/vgg/)
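The 19 weight layers above can be counted from the standard VGG "configuration E" layout given in the VGGNet paper. A minimal sketch (the `VGG19_CFG` list below is the paper's configuration; `'M'` marks a 2×2 max-pooling layer, which carries no trainable weights and so does not count toward the 19):

```python
# VGG19 ("configuration E") layer layout from the VGGNet paper.
# Numbers are the output channels of 3x3 convolution layers;
# 'M' marks a 2x2 max-pooling layer (no trainable weights).
VGG19_CFG = [64, 64, 'M',
             128, 128, 'M',
             256, 256, 256, 256, 'M',
             512, 512, 512, 512, 'M',
             512, 512, 512, 512, 'M']

conv_layers = sum(1 for v in VGG19_CFG if v != 'M')   # 16 convolution layers
fc_layers = 3                                         # 3 fully connected layers
print(conv_layers + fc_layers)                        # -> 19 weight layers
```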
## CONTENTS
- [Performance](#performance)
- [Model Conversion](#model-conversion)
- [Inference](#inference)
## Performance
|Device|SoC|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QCS6490|QNN|VGG19|224|20.3|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int8.qnn.serialized.bin)|
|AidBox QCS6490|QCS6490|QNN|VGG19|224|-|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int16.qnn.serialized.bin)|
|AidBox QCS6490|QCS6490|SNPE|VGG19|224|18.4|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int8_htp_snpe2.dlc)|
|AidBox QCS6490|QCS6490|SNPE|VGG19|224|-|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS6490/vgg19_int16_htp_snpe2.dlc)|
|APLUX QCS8550|QCS8550|QNN|VGG19|224|6.6|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int8.qnn.serialized.bin)|
|APLUX QCS8550|QCS8550|QNN|VGG19|224|11.1|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int16.qnn.serialized.bin)|
|APLUX QCS8550|QCS8550|SNPE|VGG19|224|4.2|INT8|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int8_htp_snpe2.dlc)|
|APLUX QCS8550|QCS8550|SNPE|VGG19|224|5.6|INT16|NPU|[model download](https://huggingface.co/aidlux/VGG19/blob/main/Models/QCS8550/vgg19_int16_htp_snpe2.dlc)|
|AidBox GS865|QCS8250|SNPE|VGG19|224|-|INT8|NPU|[model download]()|
## Model Conversion
Demo models converted from [**AIMO(AI Model Optimizier)**](https://aidlux.com/en/product/aimo).
The source model **vgg19.onnx** can be found [here](https://huggingface.co/aplux/VGG19/blob/main/vgg19.onnx).
The demo model conversion steps on AIMO can be found below:
|Device|SoC|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QCS6490|QNN|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_qnn_int8.png)|
|AidBox QCS6490|QCS6490|QNN|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_qnn_int16.png)|
|AidBox QCS6490|QCS6490|SNPE|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_snpe_int8.png)|
|AidBox QCS6490|QCS6490|SNPE|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS6490/aimo_vgg19_snpe_int16.png)|
|APLUX QCS8550|QCS8550|QNN|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_qnn_int8.png)|
|APLUX QCS8550|QCS8550|QNN|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_qnn_int16.png)|
|APLUX QCS8550|QCS8550|SNPE|VGG19|224|INT8|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_snpe_int8.png)|
|APLUX QCS8550|QCS8550|SNPE|VGG19|224|INT16|NPU|[View Steps](https://huggingface.co/aplux/VGG19/blob/main/AIMO/QCS8550/aimo_vgg19_snpe_int16.png)|
|AidBox GS865|QCS8250|SNPE|VGG19|224|INT8|NPU|[View Steps]()|
## Inference
### Step 1: Convert model
a. Prepare the source model in ONNX format. The source model can be found [here](https://huggingface.co/aplux/VGG19/blob/main/vgg19.onnx).
b. Log in to [AIMO](https://aidlux.com/en/product/aimo) and convert the source model to the target format. The conversion can follow the **AIMO Conversion Steps** in the [Model Conversion](#model-conversion) sheet.
c. Once the conversion task is done, download the target model file.
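Whichever runtime serves the converted model, the input tensor should be prepared the way the source Keras VGG19 expects. A minimal NumPy sketch, assuming a 224×224 RGB image and the "caffe"-style preprocessing Keras uses for VGG models (RGB→BGR channel swap plus ImageNet mean subtraction, no scaling); the `preprocess` and `top5` helper names are illustrative, not part of any SDK:

```python
import numpy as np

# ImageNet per-channel means in BGR order, as used by Keras
# "caffe"-style preprocessing for VGG models.
BGR_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(rgb_image: np.ndarray) -> np.ndarray:
    """rgb_image: HxWx3 uint8 array, already resized to 224x224."""
    x = rgb_image.astype(np.float32)
    x = x[..., ::-1]                 # RGB -> BGR
    x -= BGR_MEAN                    # subtract ImageNet channel means
    return x[np.newaxis, ...]        # add batch dim -> (1, 224, 224, 3)

def top5(logits: np.ndarray) -> list:
    """Return indices of the 5 highest-scoring ImageNet classes."""
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return probs.argsort()[::-1][:5].tolist()

# Example with dummy data (a real pipeline would load and resize an image):
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)                   # -> (1, 224, 224, 3)
```

The 1000-element output of the model can then be passed through `top5` to map raw scores to ImageNet class indices.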
### Step 2: Install AidLite SDK
The installation guide of AidLite SDK can be found [here](https://huggingface.co/datasets/aplux/AIToolKit/blob/main/AidLite%20SDK%20Development%20Documents.md#installation).
### Step 3: Run demo program