syhao777 committed
Commit 5e3ac90 • Parent: f49669d

Update README.md

Files changed (1)
  1. README.md +3 -43
README.md CHANGED
@@ -1,9 +1,6 @@
- ---
- license: other
- ---
  # DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes

- This repository contains the details of dataset and the Pytorch implementation of Baseline Method CrossMOT of the Paper:
+ This repository contains the details of the dataset and the PyTorch implementation of the baseline method CrossMOT from the paper:
  [DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes](https://arxiv.org/abs/2302.07676)

@@ -14,20 +11,12 @@ Cross-view multi-object tracking aims to link objects between frames and camera
  - **<a href="#des"><u>Dataset Description</u></a>**
  - **<a href="#str"><u>Dataset Structure</u></a>**
  - **<a href="#dow"><u>Dataset Downloads</u></a>**
- - **<a href="#det"><u>Training Detector</u></a>**
- - **<a href="#sin"><u>Single-view Tracking</u></a>**
- - **<a href="#cro"><u>Cross-view Tracking</u></a>**
  - **<a href="#ref"><u>Reference</u></a>**
  - **<a href="#con"><u>Contact</u></a>**

- The test result of the cross-view MOT baseline method *MvMHAT* on DIVOTrack:
- ![test.gif](asset/test.gif)
-
- The ground truth of DIVOTrack:
- ![gt.gif](asset/gt.gif)

  ## <a id="des">Dataset Description</a>
- We collect data in 10 different real-world scenarios, named: `'Circle', 'Shop', 'Moving', 'Park', 'Ground', 'Gate1', 'Floor', 'Side', 'Square', 'Gate2'`. All
+ We collect data in 10 different real-world scenarios, named: `'Circle', 'Shop', 'Moving', 'Park', 'Ground', 'Gate1', 'Floor', 'Side', 'Square', and 'Gate2'`. All
  the sequences are captured with 3 moving cameras (`'View1', 'View2', 'View3'`) and are manually synchronized.

  In the old version, the corresponding scenarios were named `'circleRegion', 'innerShop', 'movingView', 'park', 'playground', 'shopFrontGate', 'shopSecondFloor', 'shopSideGate', 'shopSideSquare', 'southGate'`, and the corresponding cameras were named `'Drone', 'View1', 'View2'`.
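If you need to convert annotations or file paths from the old release, the two naming schemes appear to correspond one-to-one in the order listed above. A minimal lookup sketch under that positional assumption (the mapping below is our reading of the lists, not an official table):

```python
# Assumed old-release -> current-release names, taken positionally from
# the scenario and camera lists above; verify against your local data.
OLD_TO_NEW_SCENE = {
    "circleRegion": "Circle",
    "innerShop": "Shop",
    "movingView": "Moving",
    "park": "Park",
    "playground": "Ground",
    "shopFrontGate": "Gate1",
    "shopSecondFloor": "Floor",
    "shopSideGate": "Side",
    "shopSideSquare": "Square",
    "southGate": "Gate2",
}
OLD_TO_NEW_CAMERA = {"Drone": "View1", "View1": "View2", "View2": "View3"}

def rename(old_scene: str, old_camera: str) -> tuple[str, str]:
    """Map an old-release (scenario, camera) pair to the new names."""
    return OLD_TO_NEW_SCENE[old_scene], OLD_TO_NEW_CAMERA[old_camera]
```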
@@ -54,36 +43,7 @@ DIVOTrack

  ```
  ### <a id="dow">Dataset Downloads</a>
- The whole dataset can download from [GoogleDrive](https://drive.google.com/drive/folders/1RCk95TdFv3Tt7gVuyxJasiHG1IPE6jkX?usp=sharing). **Note that, each file needs to unzip by the password. You can decompress each `.zip` file in its folder after send us (shengyuhao@zju.edu.cn, gaoangwang@intl.zju.edu.cn) the License in any format.** After that, you should run `generate_ini.py` to generate `seqinfo.ini` file.
+ The whole dataset can be downloaded from [GoogleDrive](https://drive.google.com/drive/folders/1RCk95TdFv3Tt7gVuyxJasiHG1IPE6jkX?usp=sharing). **Note that each `.zip` file is password-protected; you can decompress each file in its folder after sending us (shengyuhao@zju.edu.cn, gaoangwang@intl.zju.edu.cn) the license in any format.** After that, run `generate_ini.py` to generate the `seqinfo.ini` file for each sequence.
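The `seqinfo.ini` files follow the MOTChallenge sequence-description convention. As a rough sketch of what `generate_ini.py` produces (field names follow the MOTChallenge format; the frame directory, extension, and frame rate below are assumptions, not the script's actual values):

```python
# Hypothetical seqinfo.ini generator in the MOTChallenge style; the real
# generate_ini.py in this repository may use different paths and values.
import configparser
from pathlib import Path

from PIL import Image

def write_seqinfo(seq_dir: Path, frame_rate: int = 30) -> None:
    frames = sorted((seq_dir / "img1").glob("*.jpg"))  # assumed layout
    width, height = Image.open(frames[0]).size
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve the camelCase keys MOT tools expect
    cfg["Sequence"] = {
        "name": seq_dir.name,
        "imDir": "img1",
        "frameRate": str(frame_rate),
        "seqLength": str(len(frames)),
        "imWidth": str(width),
        "imHeight": str(height),
        "imExt": ".jpg",
    }
    with open(seq_dir / "seqinfo.ini", "w") as f:
        cfg.write(f)
```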
-
- ## <a id="det">Training Detector</a>
- The training process of our detector is in `./Training_detector/`; the details can be found in [Training_detector/README.md](https://github.com/shengyuhao/DIVOTrack/tree/main/Training_Detector#readme).
- ## <a id="sin">Single-view Tracking</a>
- We conducted experiments on DIVOTrack with five single-view benchmarks:
-
- | Benchmark | HOTA ↑ | IDF1 ↑ | MOTA ↑ | MOTP ↑ | MT ↑ | ML ↓ | AssA ↑ | IDSw ↓ | FM ↓ |
- | --------- | ------ | ------ | ------ | ------ | ---- | ---- | ------ | ------ | ---- |
- | [DeepSort](./Single_view_Tracking/Deepsort/) | 54.3 | 59.9 | 79.6 | 81.2 | 462 | 50 | 45.0 | 1,920 | **2,504** |
- | [CenterTrack](./Single_view_Tracking/CenterTrack/) | 55.3 | 62.2 | 73.4 | 80.6 | **534** | 35 | 49.2 | 1,631 | 2,950 |
- | [Tracktor](./Single_view_Tracking/Tracktor/) | 48.4 | 56.2 | 66.6 | 80.8 | 517 | **22** | 40.3 | 1,382 | 3,337 |
- | [FairMOT](./Single_view_Tracking/FairMOT/) | 65.3 | 78.2 | 82.7 | 81.9 | 486 | 48 | **62.7** | 731 | 3,498 |
- | [TraDeS](./Single_view_Tracking/TraDeS/) | 58.9 | 67.3 | 74.2 | **82.3** | 504 | 38 | 54.0 | 1,263 | 2,647 |
-
- Each single-view tracking baseline is evaluated with [TrackEval](https://github.com/shengyuhao/DIVOTrack/tree/main/TrackEval#readme).
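For readers new to these columns, MOTA is the standard CLEAR-MOT accuracy: it penalizes misses (FN), false positives (FP), and identity switches (IDSw) relative to the total number of ground-truth boxes. A minimal sketch of the textbook definition (illustrative only, not TrackEval's implementation):

```python
def mota(fn: int, fp: int, idsw: int, num_gt: int) -> float:
    """CLEAR-MOT accuracy: 1 - (FN + FP + IDSw) / GT. Can be negative."""
    return 1.0 - (fn + fp + idsw) / num_gt

# Example: 1,000 GT boxes with 80 misses, 50 false positives, and
# 10 ID switches -> 1 - 140/1000 = 0.86, reported as 86.0 above.
```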
-
- ## <a id="cro">Cross-view Tracking</a>
- We conducted experiments on the DIVOTrack dataset with six cross-view benchmarks as well as our proposed method [CrossMOT](./CrossMOT/):
-
- | Benchmark | CVMA ↑ | CVIDF1 ↑ |
- | --------- | ------ | -------- |
- | [OSNet](./Cross_view_Tracking/OSNet/) | 34.3 | 46.0 |
- | [Strong](./Cross_view_Tracking/StrongReID/) | 40.9 | 45.9 |
- | [AGW](./Cross_view_Tracking/AGW/) | 57.0 | 56.8 |
- | [MvMHAT](./Cross_view_Tracking/MvMHAT/) | 61.0 | 62.6 |
- | [CT](./Cross_view_Tracking/CT/) | 64.9 | 65.0 |
- | [MGN](./Cross_view_Tracking/MGN/) | 33.5 | 39.4 |
- | [CrossMOT](./CrossMOT/) | **72.4** | **71.1** |
-
- Except for CrossMOT, all of the other Re-ID methods require [Multi_view_Tracking](https://github.com/shengyuhao/DIVOTrack/tree/main/Multi_view_Tracking#readme) to predict the tracking results once their Re-ID outputs are obtained. The final CVMA and CVIDF1 scores are computed with [MOTChallengeEvalKit_cv_test](https://github.com/shengyuhao/DIVOTrack/tree/main/MOTChallengeEvalKit_cv_test#readme).


  ## <a id="ref">Reference</a>