VGG19: Image Classification

VGGNet is a deep convolutional neural network developed by researchers from Oxford University's Visual Geometry Group and Google DeepMind. It explores the relationship between the depth of a convolutional neural network and its performance. By repeatedly stacking 3×3 convolution kernels and 2×2 max-pooling layers, VGGNet successfully constructs convolutional networks 16 to 19 weight layers deep. Compared with previous state-of-the-art network structures, VGGNet greatly reduces the error rate. Throughout the VGGNet paper, only small 3×3 convolution kernels and 2×2 max-pooling kernels are used, continuously deepening the network structure to improve performance.
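The appeal of stacking small kernels can be checked with simple arithmetic: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer parameters. A minimal sketch (plain Python, illustrative only; the channel width is an example, not a fixed VGG value):

```python
# Parameter count of a conv layer with C input and C output channels:
# kernel_h * kernel_w * C * C (bias terms ignored for simplicity).
def conv_params(kernel: int, channels: int) -> int:
    return kernel * kernel * channels * channels

C = 256  # example channel width, as used in VGG's middle stages

# Two stacked 3x3 convs have the same 5x5 receptive field as one 5x5 conv.
stacked_3x3 = 2 * conv_params(3, C)   # 18 * C^2
single_5x5 = conv_params(5, C)        # 25 * C^2

print(stacked_3x3, single_5x5)  # the stacked version is ~28% smaller
```

The stacked version also inserts an extra non-linearity between the two convolutions, which the paper cites as a second advantage of small kernels.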

VGG19 contains 19 weight layers (16 convolutional layers and 3 fully connected layers).
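The 19 weight layers can be enumerated from the standard VGG "configuration E" layer list given in the paper; a small sketch that counts them (the list below reproduces that configuration, with "M" marking a 2×2 max-pooling layer):

```python
# VGG19 ("configuration E"): numbers are conv output channels, "M" is 2x2 max pooling.
VGG19_FEATURES = [64, 64, "M", 128, 128, "M",
                  256, 256, 256, 256, "M",
                  512, 512, 512, 512, "M",
                  512, 512, 512, 512, "M"]

conv_layers = sum(1 for v in VGG19_FEATURES if v != "M")
fc_layers = 3  # two 4096-wide fully connected layers plus the 1000-way classifier

print(conv_layers, fc_layers, conv_layers + fc_layers)  # 16 3 19
```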

The model can be found here


Performance

| Device | SoC | Runtime | Model | Size (pixels) | Inference Time (ms) | Precision | Compute Unit | Model Download |
|--------|-----|---------|-------|---------------|---------------------|-----------|--------------|----------------|
| AidBox QCS6490 | QCS6490 | QNN | VGG19 | 224 | 20.3 | INT8 | NPU | model download |
| AidBox QCS6490 | QCS6490 | QNN | VGG19 | 224 | - | INT16 | NPU | model download |
| AidBox QCS6490 | QCS6490 | SNPE | VGG19 | 224 | 18.4 | INT8 | NPU | model download |
| AidBox QCS6490 | QCS6490 | SNPE | VGG19 | 224 | - | INT16 | NPU | model download |
| APLUX QCS8550 | QCS8550 | QNN | VGG19 | 224 | 6.6 | INT8 | NPU | model download |
| APLUX QCS8550 | QCS8550 | QNN | VGG19 | 224 | 11.1 | INT16 | NPU | model download |
| APLUX QCS8550 | QCS8550 | SNPE | VGG19 | 224 | 4.2 | INT8 | NPU | model download |
| APLUX QCS8550 | QCS8550 | SNPE | VGG19 | 224 | 5.6 | INT16 | NPU | model download |
| AidBox GS865 | QCS8250 | SNPE | VGG19 | 224 | - | INT8 | NPU | model download |

Model Conversion

Demo models are converted with AIMO (AI Model Optimizer).

The source model VGG19.onnx can be found here.

The demo model conversion steps on AIMO can be found below:

| Device | SoC | Runtime | Model | Size (pixels) | Precision | Compute Unit | AIMO Conversion Steps |
|--------|-----|---------|-------|---------------|-----------|--------------|-----------------------|
| AidBox QCS6490 | QCS6490 | QNN | VGG19 | 224 | INT8 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | QNN | VGG19 | 224 | INT16 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | SNPE | VGG19 | 224 | INT8 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | SNPE | VGG19 | 224 | INT16 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | QNN | VGG19 | 224 | INT8 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | QNN | VGG19 | 224 | INT16 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | SNPE | VGG19 | 224 | INT8 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | SNPE | VGG19 | 224 | INT16 | NPU | View Steps |
| AidBox GS865 | QCS8250 | SNPE | VGG19 | 224 | INT8 | NPU | View Steps |

Inference

Step 1: convert the model

a. Prepare the source model in ONNX format. The source model can be found here.

b. Log in to AIMO and convert the source model to the target format. The conversion procedure follows the AIMO Conversion Steps column in the Model Conversion table.

c. After the conversion task is done, download the target model file.

Step 2: install the AidLite SDK

The installation guide for the AidLite SDK can be found here.

Step 3: run the demo program
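The exact AidLite API calls depend on the SDK version and are not reproduced here; whichever runtime is used, the model expects a 224×224 input tensor preprocessed the way VGG was trained. A hedged sketch of typical VGG preprocessing (per-channel mean subtraction with the ImageNet means from the original VGG release; the NumPy-only helper below is an illustrative assumption, not the AidLite demo code):

```python
import numpy as np

# ImageNet per-channel means (RGB order) used by the original VGG models.
VGG_MEAN_RGB = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image into a 1x3x224x224 float32 tensor.

    Assumes the image is already resized/cropped to 224x224; real demo
    code would also handle resizing and any BGR/RGB channel ordering the
    converted model expects.
    """
    x = image_hwc_uint8.astype(np.float32) - VGG_MEAN_RGB  # mean subtraction
    x = np.transpose(x, (2, 0, 1))                         # HWC -> CHW
    return x[np.newaxis, ...]                              # add batch dimension

# Example with a dummy all-black image:
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # (1, 3, 224, 224)
```

The resulting tensor is what gets fed to the downloaded INT8/INT16 model; quantized runtimes typically apply their own input scaling on top of this, as configured during conversion.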
