---
license: mit
---
# Towards Effective Multi-Moving-Camera Tracking: A New Dataset and Lightweight Link Model
[![](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-orange)](https://huggingface.co/datasets/jellyShuang/MMCT)

This repository contains the details of the dataset and the PyTorch implementation of the paper:
[Towards Effective Multi-Moving-Camera Tracking: A New Dataset and Lightweight Link Model](https://arxiv.org/abs/2312.11035)

## Abstract
Ensuring driving safety for autonomous vehicles has become increasingly crucial, highlighting the need for systematic tracking of on-road pedestrians. Most vehicles are equipped with visual sensors; however, the large-scale visual data they produce has not been well studied yet. Multi-target multi-camera (MTMC) tracking systems are composed of two modules: single-camera tracking (SCT) and inter-camera tracking (ICT). Reliably coordinating the two modules makes MTMC tracking a very complicated task, and tracking across multiple moving cameras makes it even more challenging. In this paper, we focus on multi-target multi-moving-camera (MTMMC) tracking, which is attracting increasing attention from the research community. Observing that there are few datasets for MTMMC tracking, we collect a new dataset, called Multi-Moving-Camera Track (MMCT), which contains sequences under various driving scenarios. To address the identity-switch problem commonly faced by existing SCT trackers, especially for moving cameras where ego-motion exists between the camera and the targets, we propose a lightweight, appearance-free global link model, called Linker, which mitigates identity switches by associating two disjoint tracklets of the same target into a complete trajectory within the same camera. Incorporated with Linker, existing SCT trackers generally obtain a significant improvement. Moreover, to alleviate the impact of image style variations caused by different cameras, a color transfer module is incorporated to extract cross-camera consistent appearance features for pedestrian association across moving cameras in ICT, resulting in a much improved MTMMC tracking system that constitutes a step further towards the coordinated mining of multiple moving cameras.

- **<a href="#des"> <u>Dataset Description</u>**</a>
  - **<a href="#str"> <u>Dataset Structure</u>**</a>
  - **<a href="#dow"> <u>Dataset Downloads</u>**</a>

## <a id="des">Dataset Description</a>
We collect data in 12 distinct scenarios: 'A', 'B', 'C', ..., 'L'. Each scenario may include the interaction of two or three cameras mounted on different cars. For example, scene A includes the two sequences `A-I` and `A-II`. There are 32 sequences in total.

### <a id="str">Dataset Structure</a>
```
MMCT
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ gps
β”‚   └── labels
└── images
    β”œβ”€β”€ 1
    β”‚   β”œβ”€β”€ A
    β”‚   β”‚   β”œβ”€β”€ IMG_0098-frag-s1-a-fps5.mp4
    β”‚   β”‚   └── jpg
    β”‚   └── C
    β”‚       β”œβ”€β”€ IMG_0559-frag-s1-c-fps5.mp4
    β”‚       └── jpg
    β”œβ”€β”€ 2
    β”‚   β”œβ”€β”€ A
    β”‚   β”‚   β”œβ”€β”€ IMG_0094-frag-s2-a-fps5.mp4
    β”‚   β”‚   └── jpg
    β”‚   └── B
    β”‚       β”œβ”€β”€ IMG_2248-frag-s2-b-fps5.mp4
    β”‚       └── jpg
    ...
    β”œβ”€β”€ 12
    β”‚   β”œβ”€β”€ A
    β”‚   β”‚   β”œβ”€β”€ IMG_0104-frag-s12-a-fps5.mp4
    β”‚   β”‚   └── jpg
    β”‚   β”œβ”€β”€ B
    β”‚   β”‚   β”œβ”€β”€ IMG_2254-frag-s12-b-fps5.mp4
    β”‚   β”‚   └── jpg
    β”‚   └── C
    β”‚       β”œβ”€β”€ IMG_0569-frag-s12-c-fps5.mp4
    β”‚       └── jpg
```
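
Once the archives are extracted into the layout above, the sequences can be enumerated by walking `images/<scene>/<camera>/`. The following is a minimal sketch (not part of the official toolkit); the function name `list_sequences` and the assumption of one `*-fps5.mp4` clip plus a `jpg` frame folder per camera are ours, inferred from the tree shown above.

```python
from pathlib import Path

def list_sequences(root: str = "MMCT"):
    """Yield (scene, camera, video_path, frame_dir) for every extracted sequence."""
    images = Path(root) / "images"
    # Sort scene folders 1..12 numerically rather than lexically.
    for scene_dir in sorted(images.iterdir(), key=lambda p: (len(p.name), p.name)):
        if not scene_dir.is_dir():
            continue
        for cam_dir in sorted(p for p in scene_dir.iterdir() if p.is_dir()):
            videos = sorted(cam_dir.glob("*-fps5.mp4"))  # the per-camera clip
            frame_dir = cam_dir / "jpg"                   # extracted frames
            if videos:
                yield scene_dir.name, cam_dir.name, videos[0], frame_dir

if __name__ == "__main__":
    for scene, cam, video, frames in list_sequences("MMCT"):
        n_frames = len(list(frames.glob("*.jpg"))) if frames.is_dir() else 0
        print(f"scene {scene} / camera {cam}: {video.name} ({n_frames} frames)")
```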

### <a id="dow">Dataset Downloads</a>
The whole dataset can be downloaded from [Huggingface](https://huggingface.co/datasets/jellyShuang/MMCT). **Note that each `.zip` file is password-protected. You can decompress each `.zip` file in its folder after sending us (2212534@mail.dhu.edu.cn, ytzhang@dhu.edu.cn) the signed [LICENSE](https://github.com/shengyuhao/DIVOTrack/blob/main/LICENSE.md) in any format.**
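
For convenience, here is a hedged sketch of one way to fetch the repository with `huggingface_hub` and extract the archives in place. The `PASSWORD_FROM_AUTHORS` value is a placeholder (the real password is provided after the signed LICENSE is received), and strongly encrypted archives may require an external tool such as 7-Zip rather than Python's `zipfile`.

```python
# Sketch only: download the dataset snapshot and extract the
# password-protected archives next to where they were downloaded.
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download  # pip install huggingface_hub

local_dir = snapshot_download(repo_id="jellyShuang/MMCT", repo_type="dataset")

password = b"PASSWORD_FROM_AUTHORS"  # placeholder, obtained from the authors
for archive in Path(local_dir).rglob("*.zip"):
    with zipfile.ZipFile(archive) as zf:
        # zipfile handles classic ZipCrypto; AES-encrypted archives may need
        # an external tool such as `7z x -p<password>` instead.
        zf.extractall(path=archive.parent, pwd=password)
```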


## <a id="ref">Reference</a>
The license agreement for data usage implies citation of the paper above. Please note that citing only the dataset URL instead of the publication does not comply with this license agreement. You can read the license at [LICENSE](https://github.com/dhu-mmct/DHU-MMCT/blob/main/LICENSE.md).


## <a id="con">Contact</a>
If you have any concerns, please contact [2212534@mail.dhu.edu.cn](mailto:2212534@mail.dhu.edu.cn).