---
license: apache-2.0
datasets:
- visdrone
model-index:
- name: ENOT-AutoDL/yolov8s_visdrone
  results:
  - task:
      type: object-detection
    dataset:
      type: visdrone
      name: visdrone
    metrics:
    - type: precision
      value: 49.4
      name: mAP50(baseline)
    - type: precision
      value: 48.3
      name: mAP50(GMACs x2)
    - type: precision
      value: 46.0
      name: mAP50(GMACs x3)
library_name: ultralytics
pipeline_tag: object-detection
tags:
- yolov8
- ENOT-AutoDL
- yolo
- vision
- ultralytics
- object-detection
---
# ENOT-AutoDL YOLOv8 optimization on the VisDrone dataset
This repository contains models accelerated with the [ENOT-AutoDL](https://pypi.org/project/enot-autodl/) framework.
We trained YOLOv8s on the VisDrone dataset and used it as our baseline.
We also provide simple Python scripts to measure MACs and metrics.
## YOLOv8 Small
| Model | GMACs | Image Size | mAP50 | mAP50-95 |
|---------------------------|:-----------:|:-----------:|:-----------:|:-----------:|
| **[YOLOv8 Ultralytics Baseline](https://docs.ultralytics.com/datasets/detect/visdrone/#dataset-yaml)** | 14.28 | 640 | 40.2 | 24.2 |
| **YOLOv8n Enot Baseline** | 8.57 | 928 | 42.9 | 26.0 |
| **YOLOv8s Enot Baseline** | 30.03 | 928 | 49.4 | 30.6 |
| **YOLOv8s (x2)** | 15.01 (x2) | 928 | 48.3 (-1.1) | 29.8 (-0.8) |
| **YOLOv8s (x3)** | 10.01 (x3) | 928 | 46.0 (-3.4) | 28.3 (-2.3) |
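
The released weights can also be used directly for inference. Below is a minimal sketch assuming the checkpoints are standard Ultralytics `.pt` files; the checkpoint path matches the one used in the validation commands below, and `drone_image.jpg` is a placeholder for your own image.
```python
from ultralytics import YOLO

# Load one of the accelerated checkpoints (same path as in the validation
# commands below; any of the released weight files can be substituted).
model = YOLO("enot_neural_architecture_selection_x2/weights/best.pt")

# Run inference at the 928 px resolution reported in the table above.
results = model.predict("drone_image.jpg", imgsz=928, conf=0.25)

# Visualize or save the annotated detections.
results[0].show()
```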
# Validation
To validate the results, follow these steps:
1. Install all required packages:
```bash
pip install -r requirements.txt
```
2. Run the validation script:
```bash
python validate.py enot_neural_architecture_selection_x2/weights/best.pt --imgsz 928
```
3. Run the measure_macs script (a rough Python equivalent of steps 2 and 3 is sketched after this list):
```bash
python measure_macs.py enot_neural_architecture_selection_x2/weights/best.pt --imgsz 928
```
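
For reference, here is a rough Python equivalent of steps 2 and 3. It assumes the checkpoints load with the standard Ultralytics API, uses `VisDrone.yaml` (the dataset config bundled with Ultralytics), and uses the `thop` package as one common way to count MACs; the bundled `validate.py` and `measure_macs.py` scripts may use different arguments or tooling.
```python
import torch
from thop import profile
from ultralytics import YOLO

WEIGHTS = "enot_neural_architecture_selection_x2/weights/best.pt"
IMGSZ = 928

model = YOLO(WEIGHTS)

# Step 2 equivalent: validate on VisDrone at 928 px. "VisDrone.yaml" is the
# dataset config shipped with Ultralytics and is downloaded on first use.
metrics = model.val(data="VisDrone.yaml", imgsz=IMGSZ)
print(f"mAP50: {metrics.box.map50:.3f}, mAP50-95: {metrics.box.map:.3f}")

# Step 3 equivalent (assumption): count MACs with thop on the underlying
# torch module; measure_macs.py may compute this differently.
dummy = torch.randn(1, 3, IMGSZ, IMGSZ)
macs, params = profile(model.model, inputs=(dummy,), verbose=False)
print(f"GMACs: {macs / 1e9:.2f}, params: {params / 1e6:.2f}M")
```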