---
license: cc-by-nc-4.0
annotations_creators:
  - crowdsourced
task_categories:
  - object-detection
  - other
language:
  - en
tags:
  - video
  - multi-object tracking
pretty_name: SportsMOT
source_datasets:
  - MultiSports
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >-
  This work is licensed under a Creative Commons Attribution-NonCommercial 4.0
  International License
extra_gated_fields:
  Institute: text
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to use this dataset for non-commercial use ONLY: checkbox
---

# Dataset Card for SportsMOT

## Dataset Details

### Dataset Description

Multi-object tracking (MOT) is a fundamental task in computer vision that aims to estimate the bounding boxes and identities of objects (e.g., pedestrians and vehicles) in video sequences. We propose SportsMOT, a large-scale multi-object tracking dataset consisting of 240 video clips from 3 categories (basketball, football and volleyball). The objective is to track only the players on the playground (i.e., excluding spectators, referees and coaches) in various sports scenes.

## Dataset Structure

Data in SportsMOT is organized in the MOT Challenge 17 (MOT17) format.

```
splits_txt (video-split mapping)
  - basketball.txt
  - volleyball.txt
  - football.txt
  - train.txt
  - val.txt
  - test.txt
scripts
  - mot_to_coco.py
  - sportsmot_to_trackeval.py
dataset (in MOT Challenge format)
  - train
    - VIDEO_NAME1
      - gt
      - img1
        - 000001.jpg
        - 000002.jpg
      - seqinfo.ini
  - val (same hierarchy as train)
  - test
    - VIDEO_NAME1
      - img1
        - 000001.jpg
        - 000002.jpg
      - seqinfo.ini
```
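In the MOT Challenge format, each line of `gt/gt.txt` encodes one box as `frame, track_id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility`, and `seqinfo.ini` carries per-sequence metadata. A minimal loader, sketched under the assumption of that standard layout (the function names are illustrative, not part of the dataset's own scripts):

```python
import configparser
from collections import defaultdict


def parse_gt(lines):
    """Group MOT-format annotation lines by frame:
    {frame: [(track_id, x, y, w, h), ...]}."""
    per_frame = defaultdict(list)
    for line in lines:
        fields = line.strip().split(",")
        if len(fields) < 6:
            continue  # skip blank or malformed lines
        frame, tid = int(fields[0]), int(fields[1])
        x, y, w, h = map(float, fields[2:6])
        per_frame[frame].append((tid, x, y, w, h))
    return dict(per_frame)


def parse_seqinfo(text):
    """Extract sequence metadata from the contents of a seqinfo.ini file."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    seq = cfg["Sequence"]
    return {
        "name": seq.get("name"),
        "frame_rate": seq.getint("frameRate"),
        "seq_length": seq.getint("seqLength"),
        "width": seq.getint("imWidth"),
        "height": seq.getint("imHeight"),
    }
```

The bundled `scripts/mot_to_coco.py` and `scripts/sportsmot_to_trackeval.py` handle conversion to the COCO and TrackEval formats; a loader like the above is only needed for custom pipelines.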

## Dataset Creation

### Curation Rationale

Multi-object tracking (MOT) is a fundamental task in computer vision that aims to estimate the bounding boxes and identities of objects (e.g., pedestrians and vehicles) in video sequences.

Prevailing human-tracking MOT datasets mainly focus on pedestrians in crowded street scenes (e.g., MOT17/20) or dancers in static scenes (DanceTrack). Despite the increasing demand for sports analysis, there is a lack of multi-object tracking datasets covering diverse sports scenes, where the background is complicated, players move rapidly and the camera moves fast.

### Source Data

We select three globally popular sports, football, basketball and volleyball, and collect high-quality videos of professional games, including NCAA, Premier League and Olympic matches, from MultiSports, a large sports dataset focusing on spatio-temporal action localization.

### Annotation Process

We annotate the collected videos according to the following guidelines.

1. The athlete's entire limbs and torso must be annotated, excluding any other objects (e.g., balls) touching the athlete's body.

2. Annotators are asked to estimate the bounding box of an occluded athlete as long as some part of the body remains visible. However, if half of an athlete's torso is outside the view, annotators should skip that athlete.

3. Annotators must confirm that each player has a unique ID throughout the whole clip.
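The unique-ID guideline has a mechanical consequence that can be verified: within one clip, a track ID should never appear twice in the same frame. A sanity-check sketch, assuming annotations have already been parsed into a `{frame: [(track_id, x, y, w, h), ...]}` mapping (this representation is an assumption for illustration, not part of the dataset's tooling):

```python
def ids_unique_per_frame(per_frame):
    """Return True if no track ID occurs twice within any single frame.

    per_frame: {frame_number: [(track_id, x, y, w, h), ...]}
    """
    for boxes in per_frame.values():
        ids = [tid for tid, *_ in boxes]
        if len(ids) != len(set(ids)):
            return False  # duplicate ID in one frame -> annotation error
    return True
```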

### Dataset Curators

Authors of *SportsMOT: A Large Multi-Object Tracking Dataset in Multiple Sports Scenes*:

- Yutao Cui
- Chenkai Zeng
- Xiaoyu Zhao
- Yichun Yang
- Gangshan Wu
- Limin Wang

## Citation Information

If you find this dataset useful, please cite:

```bibtex
@inproceedings{cui2023sportsmot,
  title={{SportsMOT}: A Large Multi-Object Tracking Dataset in Multiple Sports Scenes},
  author={Cui, Yutao and Zeng, Chenkai and Zhao, Xiaoyu and Yang, Yichun and Wu, Gangshan and Wang, Limin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9921--9931},
  year={2023}
}
```