---
license: cc-by-nc-4.0
annotations_creators:
- crowdsourced
task_categories:
- object-detection
- other
language:
- en
tags:
- video
- multi-object tracking
pretty_name: SportsMOT
source_datasets:
- MultiSports
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: "This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License"
extra_gated_fields:
  Institute: text
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to use this dataset for non-commercial use ONLY: checkbox
---
# Dataset Card for SportsMOT

## Dataset Details

### Dataset Description

Multi-object tracking (MOT) is a fundamental task in computer vision that aims to estimate the bounding boxes and identities of objects (e.g., pedestrians and vehicles) in video sequences. We propose SportsMOT, a large-scale multi-object tracking dataset consisting of 240 video clips from 3 sports categories (basketball, football, and volleyball). The objective is to track only the players on the court or field (i.e., excluding spectators, referees, and coaches) across a variety of sports scenes.

### Dataset Sources

- **Repository:** https://github.com/MCG-NJU/SportsMOT
- **Paper:** https://arxiv.org/abs/2304.05170
- **Competition:** https://codalab.lisn.upsaclay.fr/competitions/12424
- **Point of Contact:** yichunyang@smail.nju.edu.cn

## Dataset Structure

Data in SportsMOT is organized in the MOT Challenge 17 format.

```
splits_txt (video-split mapping)
  - basketball.txt
  - volleyball.txt
  - football.txt
  - train.txt
  - val.txt
  - test.txt
scripts
  - mot_to_coco.py
  - sportsmot_to_trackeval.py
dataset (in MOT Challenge format)
  - train
    - VIDEO_NAME1
      - gt
      - img1
        - 000001.jpg
        - 000002.jpg
      - seqinfo.ini
  - val (the same hierarchy as train)
  - test
    - VIDEO_NAME1
      - img1
        - 000001.jpg
        - 000002.jpg
      - seqinfo.ini
```
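Because the annotations follow the MOT Challenge convention, each line of a sequence's `gt/gt.txt` is a comma-separated record (`frame, track_id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility`), and `seqinfo.ini` holds per-sequence metadata under a `[Sequence]` section. The sketch below shows one way to load both; the exact file names and keys are assumed from the MOT Challenge convention, not verified against this repository:

```python
import configparser
import csv
from collections import defaultdict


def load_seqinfo(path):
    """Read the [Sequence] section of a MOT Challenge seqinfo.ini.

    Keys (name, frameRate, seqLength, imWidth, imHeight) follow the
    MOT Challenge convention; option lookup is case-insensitive.
    """
    cfg = configparser.ConfigParser()
    cfg.read(path)
    seq = cfg["Sequence"]
    return {
        "name": seq.get("name"),
        "frame_rate": seq.getint("frameRate"),
        "seq_length": seq.getint("seqLength"),
        "im_width": seq.getint("imWidth"),
        "im_height": seq.getint("imHeight"),
    }


def load_gt(path):
    """Parse a MOT Challenge gt.txt into {frame: [(track_id, x, y, w, h), ...]}.

    Each row is: frame, track_id, bb_left, bb_top, bb_width, bb_height,
    conf, class, visibility.
    """
    per_frame = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            frame, track_id = int(row[0]), int(row[1])
            x, y, w, h = (float(v) for v in row[2:6])
            per_frame[frame].append((track_id, x, y, w, h))
    return dict(per_frame)
```

Note that `test` sequences ship without `gt`, so the ground-truth loader applies only to `train` and `val`.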

## Dataset Creation

### Curation Rationale

Multi-object tracking (MOT) is a fundamental task in computer vision that aims to estimate the bounding boxes and identities of objects (e.g., pedestrians and vehicles) in video sequences.

Prevailing human-tracking MOT datasets mainly focus on pedestrians in crowded street scenes (e.g., MOT17/20) or on dancers in static scenes (DanceTrack). Despite the increasing demand for sports analysis, there is a lack of multi-object tracking datasets covering a variety of sports scenes, in which the background is complicated, players move rapidly, and the camera lens moves fast.

### Source Data

> We select three sports that are popular worldwide, football, basketball, and volleyball, and collect videos of high-quality professional games, including the NCAA, the Premier League, and the Olympics, from MultiSports, a large dataset in the sports domain focusing on spatio-temporal action localization.

#### Annotation Process

We annotate the collected videos according to the following guidelines.

1. The athlete's entire limbs and torso are annotated, excluding any other objects (such as balls) touching the athlete's body.

2. Annotators are asked to predict the bounding box of an occluded athlete as long as any part of the body remains visible. However, if more than half of an athlete's torso is outside the view, annotators should skip that athlete.

3. Annotators must confirm that each player keeps a unique ID throughout the whole clip.

### Dataset Curators

Authors of [SportsMOT: A Large Multi-Object Tracking Dataset in Multiple Sports Scenes](https://arxiv.org/pdf/2304.05170):

- Yutao Cui
- Chenkai Zeng
- Xiaoyu Zhao
- Yichun Yang
- Gangshan Wu
- Limin Wang

## Citation Information

If you find this dataset useful, please cite it as:

```bibtex
@inproceedings{cui2023sportsmot,
  title={Sportsmot: A large multi-object tracking dataset in multiple sports scenes},
  author={Cui, Yutao and Zeng, Chenkai and Zhao, Xiaoyu and Yang, Yichun and Wu, Gangshan and Wang, Limin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9921--9931},
  year={2023}
}
```