---
license: other
---
# DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes

This repository contains the details of the dataset and the PyTorch implementation of the baseline method CrossMOT from the paper:
[DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes](https://arxiv.org/abs/2302.07676)

## Abstract
Cross-view multi-object tracking aims to link objects between frames and camera views with substantial overlaps. Although cross-view multi-object tracking has received increased attention in recent years, existing datasets still have several issues, including 1) missing real-world scenarios, 2) lacking diverse scenes, 3) owning a limited number of tracks, 4) comprising only static cameras, and 5) lacking standard benchmarks, which hinder the investigation and comparison of cross-view tracking methods. To solve the aforementioned issues, we introduce **DIVOTrack**: a new cross-view multi-object tracking dataset for **DIV**erse **O**pen scenes with densely tracked pedestrians in realistic and non-experimental environments. Our DIVOTrack has ten distinct scenarios and 953 cross-view tracks, surpassing all cross-view multi-object tracking datasets currently available. Furthermore, we provide a novel baseline cross-view tracking method with a unified joint detection and cross-view tracking framework named CrossMOT, which learns object detection, single-view association, and cross-view matching with an all-in-one embedding model. Finally, we present a summary of current methodologies and a set of standard benchmarks with our DIVOTrack to provide a fair comparison and conduct a comprehensive analysis of current approaches and our proposed CrossMOT.

- **<a href="#des"><u>Dataset Description</u></a>**
- **<a href="#str"><u>Dataset Structure</u></a>**
- **<a href="#dow"><u>Dataset Downloads</u></a>**
- **<a href="#det"><u>Training Detector</u></a>**
- **<a href="#sin"><u>Single-view Tracking</u></a>**
- **<a href="#cro"><u>Cross-view Tracking</u></a>**
- **<a href="#ref"><u>Reference</u></a>**
- **<a href="#con"><u>Contact</u></a>**

The test result of the cross-view MOT baseline method *MvMHAT* on DIVOTrack:
![test.gif](asset/test.gif)

The ground truth of DIVOTrack:
![gt.gif](asset/gt.gif)

## <a id="des">Dataset Description</a>
We collect data in 10 different real-world scenarios, named: `'Circle', 'Shop', 'Moving', 'Park', 'Ground', 'Gate1', 'Floor', 'Side', 'Square', 'Gate2'`. All sequences are captured using 3 moving cameras: `'View1', 'View2', 'View3'`, and are manually synchronized.

In the old version of the dataset, the corresponding scenarios were named: `'circleRegion', 'innerShop', 'movingView', 'park', 'playground', 'shopFrontGate', 'shopSecondFloor', 'shopSideGate', 'shopSideSquare', 'southGate'`, and the corresponding cameras were named: `'Drone', 'View1', 'View2'`. A name mapping is sketched below.
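
The two lists above appear to correspond element-wise. A minimal sketch of the mapping, assuming that order does hold (the helper itself is hypothetical, not part of the repository):

```python
# Hypothetical helper mapping old-release names to the current ones,
# assuming the two scenario lists above correspond element-wise.
OLD_TO_NEW_SCENE = {
    'circleRegion': 'Circle',
    'innerShop': 'Shop',
    'movingView': 'Moving',
    'park': 'Park',
    'playground': 'Ground',
    'shopFrontGate': 'Gate1',
    'shopSecondFloor': 'Floor',
    'shopSideGate': 'Side',
    'shopSideSquare': 'Square',
    'southGate': 'Gate2',
}

OLD_TO_NEW_VIEW = {'Drone': 'View1', 'View1': 'View2', 'View2': 'View3'}

def rename(old_scene: str, old_view: str) -> tuple[str, str]:
    """Translate an old-release (scene, view) pair to the new naming."""
    return OLD_TO_NEW_SCENE[old_scene], OLD_TO_NEW_VIEW[old_view]
```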

### <a id="str">Dataset Structure</a>
The structure of our dataset is as follows:
```
DIVOTrack
└─────datasets
        └─────DIVO
                ├───images
                │     ├───annotations
                │     ├───dets
                │     ├───train
                │     └───test
                ├───labels_with_ids
                │     ├───train
                │     └───test
                ├───ReID_format
                │     ├───bounding_box_test
                │     ├───bounding_box_train
                │     └───query
                └───boxes.json
```
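
To sanity-check a local copy against this layout, a minimal sketch that assumes only the directories shown above (the root path is a placeholder):

```python
from pathlib import Path

# Directories expected under datasets/DIVO, per the layout above.
EXPECTED = [
    'images/annotations', 'images/dets', 'images/train', 'images/test',
    'labels_with_ids/train', 'labels_with_ids/test',
    'ReID_format/bounding_box_test', 'ReID_format/bounding_box_train',
    'ReID_format/query',
]

def check_layout(root: str = 'DIVOTrack/datasets/DIVO') -> None:
    base = Path(root)
    for rel in EXPECTED:
        status = 'ok' if (base / rel).is_dir() else 'MISSING'
        print(f'{status:>7}  {rel}')
    print('boxes.json:', 'ok' if (base / 'boxes.json').is_file() else 'MISSING')

if __name__ == '__main__':
    check_layout()
```
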
### <a id="dow">Dataset Downloads</a>
The whole dataset can be downloaded from [GoogleDrive](https://drive.google.com/drive/folders/1RCk95TdFv3Tt7gVuyxJasiHG1IPE6jkX?usp=sharing). **Note that each file must be unzipped with a password. You can decompress each `.zip` file in its folder after sending us (shengyuhao@zju.edu.cn, gaoangwang@intl.zju.edu.cn) the license in any format.** After that, you should run `generate_ini.py` to generate the `seqinfo.ini` files.
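
`seqinfo.ini` files follow the MOTChallenge convention. A minimal sketch of what such a generator might do; the field values, image directory name (`img1`), and extension below are illustrative assumptions, not the verified output of `generate_ini.py`:

```python
import configparser
from pathlib import Path

def write_seqinfo(seq_dir: str, frame_rate: int = 30,
                  im_width: int = 1920, im_height: int = 1080) -> None:
    """Write a MOTChallenge-style seqinfo.ini for one sequence.

    Field names follow the MOTChallenge convention; the image directory
    name, extension, and resolution here are illustrative assumptions.
    """
    seq = Path(seq_dir)
    frames = sorted(seq.glob('img1/*.jpg'))
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # keep the camelCase keys MOTChallenge uses
    cfg['Sequence'] = {
        'name': seq.name,
        'imDir': 'img1',
        'frameRate': str(frame_rate),
        'seqLength': str(len(frames)),
        'imWidth': str(im_width),
        'imHeight': str(im_height),
        'imExt': '.jpg',
    }
    with open(seq / 'seqinfo.ini', 'w') as f:
        cfg.write(f)
```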

## <a id="det">Training Detector</a>
The training process of our detector is in `./Training_detector/`, and the details can be found in [Training_detector/README.md](https://github.com/shengyuhao/DIVOTrack/tree/main/Training_Detector#readme).

## <a id="sin">Single-view Tracking</a>
We conducted experiments on DIVOTrack with five single-view tracking benchmarks:

| Benchmark | HOTA ↑ | IDF1 ↑ | MOTA ↑ | MOTP ↑ | MT ↑ | ML ↓ | AssA ↑ | IDSw ↓ | FM ↓ |
| --------- | ------ | ------ | ------ | ------ | ---- | ---- | ------ | ------ | ---- |
| [DeepSort](./Single_view_Tracking/Deepsort/) | 54.3 | 59.9 | 79.6 | 81.2 | 462 | 50 | 45.0 | 1,920 | **2,504** |
| [CenterTrack](./Single_view_Tracking/CenterTrack/) | 55.3 | 62.2 | 73.4 | 80.6 | **534** | 35 | 49.2 | 1,631 | 2,950 |
| [Tracktor](./Single_view_Tracking/Tracktor/) | 48.4 | 56.2 | 66.6 | 80.8 | 517 | **22** | 40.3 | 1,382 | 3,337 |
| [FairMOT](./Single_view_Tracking/FairMOT/) | **65.3** | **78.2** | **82.7** | 81.9 | 486 | 48 | **62.7** | **731** | 3,498 |
| [TraDeS](./Single_view_Tracking/TraDeS/) | 58.9 | 67.3 | 74.2 | **82.3** | 504 | 38 | 54.0 | 1,263 | 2,647 |

Each single-view tracking baseline is evaluated using [TrackEval](https://github.com/shengyuhao/DIVOTrack/tree/main/TrackEval#readme).
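
For reference, TrackEval also exposes a Python API. A minimal sketch assuming its MOTChallenge-style dataset reader; the folder paths and tracker name are placeholders, not this repository's actual configuration:

```python
import trackeval  # bundled in the TrackEval folder linked above

# Start from TrackEval's default configs, then point them at local data.
eval_config = trackeval.Evaluator.get_default_eval_config()
dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
dataset_config.update({
    'GT_FOLDER': 'data/gt',              # placeholder ground-truth folder
    'TRACKERS_FOLDER': 'data/trackers',  # placeholder tracker-output folder
    'TRACKERS_TO_EVAL': ['FairMOT'],     # example tracker name
})

evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(),
                trackeval.metrics.Identity()]  # HOTA, MOTA/MOTP, IDF1
evaluator.evaluate(dataset_list, metrics_list)
```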

## <a id="cro">Cross-view Tracking</a>
We conducted experiments on the DIVOTrack dataset using six benchmarks as well as our proposed method [CrossMOT](./CrossMOT/):

| Benchmark | CVMA ↑ | CVIDF1 ↑ |
| --------- | ------ | -------- |
| [OSNet](./Cross_view_Tracking/OSNet/) | 34.3 | 46.0 |
| [Strong](./Cross_view_Tracking/StrongReID/) | 40.9 | 45.9 |
| [AGW](./Cross_view_Tracking/AGW/) | 57.0 | 56.8 |
| [MvMHAT](./Cross_view_Tracking/MvMHAT/) | 61.0 | 62.6 |
| [CT](./Cross_view_Tracking/CT/) | 64.9 | 65.0 |
| [MGN](./Cross_view_Tracking/MGN/) | 33.5 | 39.4 |
| [CrossMOT](./CrossMOT/) | **72.4** | **71.1** |

With the exception of CrossMOT, all of the other Re-ID methods require [Multi_view_Tracking](https://github.com/shengyuhao/DIVOTrack/tree/main/Multi_view_Tracking#readme) to predict the cross-view tracking results after the Re-ID features are obtained. Finally, the CVMA and CVIDF1 results are computed with [MOTChallengeEvalKit_cv_test](https://github.com/shengyuhao/DIVOTrack/tree/main/MOTChallengeEvalKit_cv_test#readme).
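
To make the cross-view matching step concrete, here is a minimal sketch that associates detections between two views by cosine similarity of appearance embeddings with Hungarian assignment. It illustrates the general idea only and is not the CrossMOT implementation; the similarity threshold and embedding source are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_across_views(emb_a: np.ndarray, emb_b: np.ndarray,
                       sim_threshold: float = 0.5) -> list[tuple[int, int]]:
    """Match detections between two views by cosine similarity.

    emb_a: (N, D) appearance embeddings from view A.
    emb_b: (M, D) appearance embeddings from view B.
    Returns index pairs (i, j) whose similarity clears the threshold.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T                             # (N, M) cosine similarities
    rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
    return [(i, j) for i, j in zip(rows, cols) if sim[i, j] >= sim_threshold]
```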

## <a id="ref">Reference</a>
Any use whatsoever of this dataset and its associated software shall constitute your acceptance of the terms of this agreement. By using the dataset and its associated software, you agree to cite the authors' papers in any publications by you and your collaborators that make any use of the dataset, in the following format:
```
@article{wangdivotrack,
  title={DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes},
  author={Hao, Shenghao and Liu, Peiyuan and Zhan, Yibing and Jin, Kaixun and Liu, Zuozhu and Song, Mingli and Hwang, Jenq-Neng and Wang, Gaoang},
  journal={arXiv preprint arXiv:2302.07676},
  year={2023}
}
```
The license agreement for data usage implies the citation of the paper above. Please note that citing the dataset URL instead of the publications would not be compliant with this license agreement. You can read the license at [LICENSE](https://github.com/shengyuhao/DIVOTrack/blob/main/LICENSE.md).

## <a id="con">Contact</a>
If you have any concerns, please contact [shengyuhao@zju.edu.cn](mailto:shengyuhao@zju.edu.cn).