---
title: COVER
emoji: πŸƒ
colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 4.31.5
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# COVER

Official code, demo, and weights for the [CVPR Workshop 2024] paper "COVER: A Comprehensive Video Quality Evaluator".

TODO: update dates and Hugging Face model links below

- xx xxx, 2024: We upload the weights of COVER and COVER++ to Hugging Face models.
- xx xxx, 2024: We upload the code of COVER.
- 12 Apr, 2024: COVER has been accepted by CVPR Workshop 2024.

TODO: update the visitors badge below

[visitors badge] [State-of-the-Art badge] [Google Colab badge]

TODO: update the predicted scores for the YT-UGC challenge dataset specified by AIS

COVER pseudo-labelled quality scores for YT-UGC: CSV


## Introduction

TODO: add Introduction here

[Fig: the proposed COVER]

## Install

The repository can be installed via the following commands:

    git clone https://github.com/vztu/COVER
    cd COVER
    pip install -e .
    mkdir pretrained_weights
    cd pretrained_weights
    wget https://github.com/vztu/COVER/release/Model/COVER.pth
    cd ..

## Evaluation: Judge the Quality of Any Video

### Try on Demos

You can run a single command to judge the quality of the demo videos in comparison with videos in VQA datasets.

    python evaluate_one_video.py -v ./demo/video_1.mp4

or

    python evaluate_one_video.py -v ./demo/video_2.mp4

Or choose any video you like to predict its quality:

    python evaluate_one_video.py -v $YOUR_SPECIFIED_VIDEO_PATH$

### Outputs

#### ITU-Standardized Overall Video Quality Score

The script can directly score the video's overall quality (considering all perspectives).

    python evaluate_one_video.py -v $YOUR_SPECIFIED_VIDEO_PATH$

The final output score is the average over all perspectives.
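As a minimal illustration of this averaging, the sketch below combines per-perspective scores into one overall score. The perspective names and values are assumptions for demonstration, not output of the actual script:

    # Minimal sketch (not the official API): average per-perspective scores
    # into one overall score, as described above. Names/values are illustrative.
    def overall_quality(perspective_scores):
        """Return the mean of the per-perspective quality scores."""
        return sum(perspective_scores.values()) / len(perspective_scores)

    # Hypothetical example values for COVER's perspectives.
    scores = {"semantic": 0.62, "technical": 0.48, "aesthetic": 0.55}
    print(f"Overall quality: {overall_quality(scores):.4f}")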

### Evaluate on an Existing Video Dataset

    python evaluate_one_dataset.py -in $YOUR_SPECIFIED_DIR$ -out $OUTPUT_CSV_PATH$

### Evaluate on a Set of Unlabelled Videos

    python evaluate_a_set_of_videos.py -in $YOUR_SPECIFIED_DIR$ -out $OUTPUT_CSV_PATH$

The results are stored as .csv files in cover_predictions in your OUTPUT_CSV_PATH.

Please feel free to use COVER to pseudo-label your non-quality video datasets.
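For instance, a pseudo-labelling pass over the output could look like the hedged sketch below; the file name and the "score" column name are assumptions, so check the actual CSV header first:

    # Hedged sketch: turn COVER predictions into pseudo-labels for an
    # unlabelled dataset. File and column names below are assumptions.
    import pandas as pd

    preds = pd.read_csv("cover_predictions/results.csv")  # hypothetical file name
    # Keep videos whose predicted quality is above the median as "good" pseudo-labels.
    threshold = preds["score"].median()
    pseudo_labels = preds[preds["score"] >= threshold]
    pseudo_labels.to_csv("pseudo_labelled_subset.csv", index=False)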

## Data Preparation

We have already converted the labels for the most popular Blind Video Quality Assessment datasets; the download links for the videos are as follows:

:book: LSVQ: Github

:book: KoNViD-1k: Official Site

:book: LIVE-VQC: Official Site

:book: YouTube-UGC: Official Site

(Please contact the original authors if the download links are unavailable.)

After downloading, put the datasets under ../datasets (or anywhere else, as long as you change data_prefix accordingly in the config file).
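If you prefer to script that change, a minimal sketch is below, assuming the config is plain YAML; the exact key path to data_prefix is an assumption, so check the config file in the repository:

    # Hedged sketch: point a dataset entry at your local video directory by
    # rewriting data_prefix in the YAML config. Key names are assumptions.
    import yaml

    with open("cover.yml") as f:
        cfg = yaml.safe_load(f)

    # Hypothetical key path; adapt to the real layout of the config file.
    cfg["data"]["val-kv1k"]["args"]["data_prefix"] = "/data/KoNViD-1k"

    with open("cover.yml", "w") as f:
        yaml.safe_dump(cfg, f)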

## Training: Adapt COVER to Your Video Quality Dataset!

Now you can employ head-only/end-to-end transfer of COVER to get dataset-specific VQA prediction heads.

We still recommend head-only transfer. As evaluated in the paper, it performs very similarly to end-to-end transfer (usually a 1%~2% difference) while requiring much less GPU memory, as follows:

    python transfer_learning.py -t $YOUR_SPECIFIED_DATASET_NAME$
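To illustrate why this is cheap, the sketch below freezes a stand-in backbone so that only the regression head receives gradients; the module names are illustrative assumptions, not the repository's actual classes:

    # Hedged sketch of head-only transfer: freeze the backbone, train only
    # the regression head. TinyVQA is a stand-in, not COVER's real model.
    import torch.nn as nn
    from torch.optim import AdamW

    class TinyVQA(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv3d(3, 8, kernel_size=3, padding=1),  # toy video feature extractor
                nn.AdaptiveAvgPool3d(1),
                nn.Flatten(),
            )
            self.head = nn.Linear(8, 1)  # dataset-specific regression head

        def forward(self, x):
            return self.head(self.backbone(x))

    model = TinyVQA()
    for p in model.backbone.parameters():
        p.requires_grad = False  # backbone is not updated in head-only transfer

    # The optimizer sees only the tiny head, hence the low memory footprint
    # and the small head-only checkpoints described below.
    optimizer = AdamW(model.head.parameters(), lr=1e-3)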

For existing public datasets, run the respective command:

- `python transfer_learning.py -t val-kv1k` for KoNViD-1k.
- `python transfer_learning.py -t val-ytugc` for YouTube-UGC.
- `python transfer_learning.py -t val-cvd2014` for CVD2014.
- `python transfer_learning.py -t val-livevqc` for LIVE-VQC.

As the backbone is not updated here, the checkpoint saving process will only save the regression heads, with a file size of only 398 KB (compared with the 200+ MB full model). To use such a checkpoint, simply replace the corresponding head weights in the official weights COVER.pth.
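A minimal sketch of that replacement is below, assuming both files are plain state dicts and that the head parameters share key names; inspect the real checkpoints before relying on this:

    # Hedged sketch: merge a head-only checkpoint into the official full
    # weights. Paths and the flat state-dict layout are assumptions.
    import torch

    full = torch.load("pretrained_weights/COVER.pth", map_location="cpu")
    head = torch.load("head_only_checkpoint.pth", map_location="cpu")  # hypothetical path

    # Overwrite only the keys the head-only checkpoint provides.
    full.update(head)
    torch.save(full, "pretrained_weights/COVER_transferred.pth")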

We also support end-to-end fine-tuning (by changing num_epochs: 0 to num_epochs: 15 in ./cover.yml). It requires more GPU memory and more storage for the saved weights (full parameters), but yields optimal accuracy.

Fine-tuning curves from the authors can be found here for reference: Official Curves.

## Visualization

### WandB Training and Evaluation Curves

You can monitor your results on WandB!

## Acknowledgement

Thanks to every participant of the subjective studies!

## Citation

Should you find our work interesting and wish to cite it, please feel free to add these to your references!

TODO: add BibTeX of COVER below

%cover