This repository contains the checkpoints for the paper "Towards Universal Soccer Video Understanding": https://arxiv.org/abs/2412.01820/.

Project Page | Paper | Code | Dataset (coming soon) | Checkpoints

Requirements

A suitable conda environment named UniSoccer can be created and activated with:

conda env create -f environment.yaml
conda activate UniSoccer

Train

Pretrain MatchVision Encoder

As described in the paper, there are two ways to pretrain the MatchVision backbone: supervised classification and contrastive commentary retrieval. You can run either of them as follows.

First, prepare the textual data in the format shown in train_data/json, and preprocess the soccer videos into 30-second clips (15 s before and 15 s after each annotated timestamp) for pretraining.
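A minimal sketch of the clipping step, assuming ffmpeg is installed; the helper name, file paths, and exact ffmpeg flags are illustrative, not the repository's own preprocessing script:

```python
import subprocess

def clip_command(src, ts_seconds, dst):
    """Build an ffmpeg command that cuts a 30-second clip
    centred on ts_seconds (15 s before, 15 s after)."""
    start = max(0.0, ts_seconds - 15.0)
    return [
        "ffmpeg", "-y",
        "-ss", f"{start:.2f}",   # seek to 15 s before the event
        "-i", src,
        "-t", "30",              # keep 30 seconds
        "-c", "copy",            # stream copy: fast, no re-encode
        dst,
    ]

# Hypothetical example: an event annotated 3730 s into the match video
cmd = clip_command("match.mp4", 3730.0, "clips/goal_0001.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually cut the clip
```

Stream copy avoids re-encoding, but cuts land on keyframes; re-encode instead if you need frame-accurate boundaries.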

Supervised Classification

python task/pretrain_MatchVoice_Classifier.py config/pretrain_classification.py

Contrastive Commentary Retrieval

python task/pretrain_contrastive.py config/pretrain_contrastive.py

You can also fine-tune MatchVision with:

python task/finetune_contrastive.py config/finetune_contrastive.py

Note that you should replace the folder paths in the task and config files with your own.

Train Downstream Tasks

You can train the commentary task in two different ways:

  1. Use .mp4 files
python task/downstream_commentary_new_benchmark.py 

With this method, you can train the commentary model MatchVoice with the visual encoder or language decoder unfrozen, so you should crop the videos into 30-second clips named as the JSON files indicate.

  2. Use .npy files
python task/downstream_commentary.py

With this method, the visual encoder stays frozen, so you can extract the features of all video clips in advance and replace the ".mp4" extension with ".npy" in the file names.
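A sketch of saving per-clip features under the matching .npy name; the helper, directory layout, and the (num_frames, feature_dim) shape are assumptions for illustration, not the repository's own extraction code:

```python
import tempfile
from pathlib import Path
import numpy as np

def save_clip_features(features, mp4_path, out_dir):
    """Save a clip's precomputed visual features as .npy, keeping the
    clip's base name so the JSON annotations still resolve correctly."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    # "goal_0001.mp4" -> "goal_0001.npy"
    npy_path = out_dir / Path(mp4_path).with_suffix(".npy").name
    np.save(npy_path, features)
    return npy_path

# Hypothetical example: a (num_frames, feature_dim) array from the frozen encoder
feats = np.zeros((30, 768), dtype=np.float32)
out = Path(tempfile.mkdtemp())
p = save_clip_features(feats, "clips/goal_0001.mp4", out)
```

Keeping the base name identical means only the extension changes between the two training modes.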

Note that the folder words_world stores the token ids of all words in the LLaMA-3 (8B) tokenizer for the different datasets:

  • match_time.pkl: MatchTime dataset (Link here)
  • soccerreplay-1988.pkl: SoccerReplay-1988 dataset. (Not released yet)
  • merge.pkl: Union set of MatchTime & SoccerReplay-1988
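These files are plain pickles, so they can be inspected with the standard library; the round trip below is a sketch, and the set-of-token-ids structure is an assumption you should confirm against the actual files in your checkout:

```python
import os
import pickle
import tempfile

# Write and read back a tiny stand-in for a words_world .pkl file.
sample_token_ids = {101, 2024, 30001}  # hypothetical LLaMA-3 token ids
path = os.path.join(tempfile.mkdtemp(), "merge.pkl")

with open(path, "wb") as f:
    pickle.dump(sample_token_ids, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)  # inspect type(loaded) on the real files
```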

Inference

For inference, use the following command; make sure you have cropped the video clips correctly, in the same format as before.

python inference/inference.py

Then you can compute the metrics for the output sample.csv with:

python inference/score_single.py --csv_path inference/sample.csv

Citation

If you use this code and data for your research or project, please cite:

@misc{rao2024unisoccer,
  title   = {Towards Universal Soccer Video Understanding},
  author  = {Rao, Jiayuan and Wu, Haoning and Jiang, Hao and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
  journal = {arXiv preprint arXiv:2412.01820},
  year    = {2024},
}
