Update README.md

colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
pinned: false
license: mit

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# [CVPRW 2024] [COVER](https://openaccess.thecvf.com/content/CVPR2024W/AI4Streaming/papers/He_COVER_A_Comprehensive_Video_Quality_Evaluator_CVPRW_2024_paper.pdf): A Comprehensive Video Quality Evaluator

**Winner solution for the [Video Quality Assessment Challenge](https://codalab.lisn.upsaclay.fr/competitions/17340) at the 1st [AIS 2024](https://ai4streaming-workshop.github.io/) workshop @ CVPR 2024**

Official code, demo, and weights for the CVPR Workshop 2024 paper [*"COVER: A Comprehensive Video Quality Evaluator"*](https://openaccess.thecvf.com/content/CVPR2024W/AI4Streaming/papers/He_COVER_A_Comprehensive_Video_Quality_Evaluator_CVPRW_2024_paper.pdf).

- 29 May, 2024: We created a space for [COVER](https://huggingface.co/spaces/Sorakado/COVER) on Hugging Face.
- 09 May, 2024: We uploaded the code of [COVER](https://github.com/vztu/COVER).
- 12 Apr, 2024: COVER was accepted by the CVPR 2024 Workshop.

![visitors](https://visitor-badge.laobi.icu/badge?page_id=vztu/COVER) [![](https://img.shields.io/github/stars/vztu/COVER)](https://github.com/vztu/COVER)
[![State-of-the-Art](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/vztu/COVER)
<a href="https://huggingface.co/spaces/Sorakado/COVER"><img src="./figs/deploy-on-spaces-sm-dark.svg" alt="Hugging Face logo"></a>
<a href="https://colab.research.google.com/github/taskswithcode/COVER/blob/master/TWCCOVER.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Google Colab logo"></a>

## Introduction

- Existing UGC VQA models strive to quantify quality degradation mainly from the technical aspect; a few consider aesthetic or semantic aspects, but no model has addressed all three aspects simultaneously.
- The demand for high-resolution and high-frame-rate videos on social media platforms presents new challenges for VQA: models must remain effective while also meeting real-time requirements.

## The Proposed COVER

*This inspires us to develop a comprehensive and efficient model for the UGC VQA task.*

![Fig](./figs/approach.jpg)

### Results

Results comparison on the YT-UGC dataset:

| Method | SROCC | KROCC | PLCC | RMSE | Run Time |
| ---- | ---- | ---- | ---- | ---- | ---- |
| [**COVER**](https://github.com/vztu/COVER/release/Model/COVER.pth) | 0.9143 | 0.7413 | 0.9122 | 0.2519 | 79.37 ms |
| TVQE (Wang *et al.*, CVPRWS 2024) | 0.9150 | 0.7410 | 0.9182 | - | 705.30 ms |
| Q-Align (Zhang *et al.*, CVPRWS 2024) | 0.9080 | 0.7340 | 0.9120 | - | 1707.06 ms |
| SimpleVQA+ (Sun *et al.*, CVPRWS 2024) | 0.9060 | 0.7280 | 0.9110 | - | 245.51 ms |

The run time is measured on an NVIDIA A100 GPU, using a clip of 30 frames at 4K resolution (3840×2160) as input.
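
To make the metric columns concrete, the sketch below shows how SROCC/KROCC/PLCC/RMSE are conventionally computed from predicted scores and ground-truth MOS with NumPy/SciPy. The score arrays are made-up placeholders, not COVER outputs, and papers often fit a nonlinear logistic mapping before PLCC/RMSE, which is omitted here for brevity.

```python
# Sketch: computing the table's metrics from predictions and ground-truth MOS.
# The arrays below are illustrative placeholders, not actual COVER results.
import numpy as np
from scipy import stats

pred = np.array([3.1, 2.4, 4.0, 3.6, 1.9])  # hypothetical model predictions
mos  = np.array([3.0, 2.7, 4.2, 3.5, 2.1])  # hypothetical ground-truth MOS

srocc, _ = stats.spearmanr(pred, mos)   # rank correlation (monotonicity)
krocc, _ = stats.kendalltau(pred, mos)  # rank correlation (pairwise ordering)
plcc, _  = stats.pearsonr(pred, mos)    # linear correlation
rmse     = np.sqrt(np.mean((pred - mos) ** 2))

print(f"SROCC={srocc:.4f} KROCC={krocc:.4f} PLCC={plcc:.4f} RMSE={rmse:.4f}")
```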

## Install

### Outputs

#### ITU-Standardized Overall Video Quality Score

The script can directly score the video's overall quality (considering all perspectives).

```shell
python evaluate_one_video.py -v $YOUR_SPECIFIED_VIDEO_PATH$
```

The final output score is the sum of all perspectives.
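
As a rough illustration of that summation, the snippet below adds the per-perspective scores; the perspective names follow the paper's semantic/aesthetic/technical split, but the values and the exact output format of `evaluate_one_video.py` are assumptions.

```python
# Sketch: combining per-perspective scores into the overall COVER score.
# Values are placeholders; check evaluate_one_video.py for the real output format.
scores = {
    "semantic":  0.1532,   # placeholder value
    "aesthetic": 0.0876,   # placeholder value
    "technical": -0.0421,  # placeholder value
}

overall = sum(scores.values())  # overall quality = sum of all perspectives
print(f"overall score: {overall:.4f}")
```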

## Evaluate on an Existing Video Dataset

Now you can employ ***head-only/end-to-end transfer*** of COVER to get dataset-specific VQA prediction heads.

We still recommend **head-only** transfer. As evaluated in the paper, it performs very similarly to *end-to-end transfer* (usually a 1%~2% difference) while requiring **much less** GPU memory:

```shell
python transfer_learning.py -t $YOUR_SPECIFIED_DATASET_NAME$
```
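
For intuition on why head-only transfer is so much lighter, it amounts to freezing the backbone and optimizing only the regression heads. A minimal PyTorch-style sketch follows; the `backbone`/`heads` attribute names and optimizer settings are assumptions, not the actual layout used by `transfer_learning.py`.

```python
# Sketch of head-only transfer: freeze the backbone, train only the heads.
# Attribute names and hyperparameters are assumptions, not COVER's real API.
import torch

def head_only_setup(model, lr=1e-3):
    # Freeze every backbone parameter so no gradients or optimizer state
    # are kept for it -- this is where the GPU memory saving comes from.
    for p in model.backbone.parameters():
        p.requires_grad = False

    # Optimize only the lightweight regression-head parameters.
    head_params = [p for p in model.heads.parameters() if p.requires_grad]
    return torch.optim.AdamW(head_params, lr=lr)
```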

For existing public datasets, type the following commands for the respective ones, for example:

- `python transfer_learning.py -t val-cvd2014` for CVD2014.
- `python transfer_learning.py -t val-livevqc` for LIVE-VQC.

As the backbone will not be updated here, the checkpoint saving process will only save the regression heads (about `398KB`, compared with `200+MB` for the full model). To use them, simply load the official weights [COVER.pth](https://github.com/vztu/COVER/release/Model/COVER.pth) and replace the head weights with your newly trained ones.
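
A hedged sketch of that replacement step: load the official full checkpoint, then overwrite its head entries with the heads produced by transfer learning. The head-checkpoint filename and the `"head"` key filter are assumptions; inspect the state-dict keys of your checkpoints before relying on this.

```python
# Sketch: combine the official full weights with newly trained heads.
# File names and the "head" substring filter are assumptions; your checkpoint
# may also wrap the weights in a nested "state_dict" entry.
import torch

full_sd = torch.load("COVER.pth", map_location="cpu")               # official full model
head_sd = torch.load("transferred_heads.pth", map_location="cpu")   # head-only checkpoint (~398KB)

# Overwrite only the head-related entries of the full state dict.
merged = dict(full_sd)
merged.update({k: v for k, v in head_sd.items() if "head" in k})

torch.save(merged, "COVER_transferred.pth")
```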

We also support ***end-to-end*** fine-tuning (by changing `num_epochs: 0` to `num_epochs: 15` in `./cover.yml`). It requires more GPU memory and more storage for the saved weights (full parameters), but yields optimal accuracy.
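
If you prefer to flip that setting programmatically rather than by hand, a small PyYAML sketch is below. It assumes `num_epochs` is a top-level key in `./cover.yml` (adjust the lookup if it is nested), and note that round-tripping through `yaml.safe_dump` drops comments, so hand-editing works just as well.

```python
# Sketch: switch ./cover.yml from head-only transfer (num_epochs: 0)
# to end-to-end fine-tuning (num_epochs: 15).
# Assumes num_epochs is a top-level key; adjust if it is nested in your config.
import yaml

with open("cover.yml") as f:
    cfg = yaml.safe_load(f)

cfg["num_epochs"] = 15  # 0 = head-only transfer, 15 = end-to-end fine-tuning

with open("cover.yml", "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```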

Fine-tuning curves by the authors are available for reference: [Official Curves](https://wandb.ai/timothyhwu/COVER).

## Visualization

### WandB Training and Evaluation Curves

Thanks to every participant of the subjective studies!

Should you find our work interesting and would like to cite it, please feel free to add the following to your references!

```bibtex
% AIS 2024 VQA challenge
@article{conde2024ais,
  title={AIS 2024 challenge on video quality assessment of user-generated content: Methods and results},
  author={Conde, Marcos V and Zadtootaghaj, Saman and Barman, Nabajeet and Timofte, Radu and He, Chenlong and Zheng, Qi and Zhu, Ruoxi and Tu, Zhengzhong and Wang, Haiqiang and Chen, Xiangguang and others},
  journal={arXiv preprint arXiv:2404.16205},
  year={2024}
}

% COVER
@inproceedings{cover2024cvprws,
  title={COVER: A comprehensive video quality evaluator},
  author={He, Chenlong and Zheng, Qi and Zhu, Ruoxi and Zeng, Xiaoyang and Fan, Yibo and Tu, Zhengzhong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year={2024}
}
```