---
task_categories:
- object-detection
license: mit
tags:
- computer vision
- amodal-tracking
- object-tracking
- amodal-perception
configs:
- config_name: default
  data_files:
  - split: train
    path: "amodal_annotations/train.json"
  - split: validation
    path: "amodal_annotations/validation.json"
  - split: test
    path: "amodal_annotations/test.json"
extra_gated_prompt: "To download the AVA and HACS videos you have to agree to terms and conditions."
extra_gated_fields:
  You will use the Datasets only for non-commercial research and educational purposes.:
    type: select
    options: 
      - Yes
      - No
  You will NOT distribute the Datasets or any parts thereof.:
    type: select
    options: 
      - Yes
      - No
  Carnegie Mellon University makes no representations or warranties regarding the datasets, including but not limited to warranties of non-infringement or fitness for a particular purpose.:
    type: select
    options: 
      - Yes
      - No
  You accept full responsibility for your use of the datasets and shall defend and indemnify Carnegie Mellon University, including its employees, officers and agents, against any and all claims arising from your use of the datasets, including but not limited to your use of any copyrighted videos or images that you may create from the datasets.:
    type: select
    options:
      - Yes
      - No
  You will treat people appearing in this data with respect and dignity.:
    type: select
    options: 
      - Yes
      - No
  This data comes with no warranty or guarantee of any kind, and you accept full liability.:
    type: select
    options: 
      - Yes
      - No
extra_gated_heading: "TAO-Amodal VIDEO Request"
extra_gated_button_content: "Request Data"
---

# TAO-Amodal Dataset

<!-- Provide a quick summary of the dataset. -->
 Official source for downloading the TAO-Amodal dataset.
   
   [**📙 Project Page**](https://tao-amodal.github.io/)  | [**💻 Code**](https://github.com/WesleyHsieh0806/TAO-Amodal) | [**📎 Paper Link**](https://arxiv.org/abs/2312.12433) | [**✏️ Citations**](#citation)
   
   <div align="center">
  <a href="https://tao-amodal.github.io/"><img width="95%" alt="TAO-Amodal" src="https://tao-amodal.github.io/static/images/webpage_preview.png"></a>
   </div>

<br/>

Contact: [🙋🏻‍♂️ Cheng-Yen (Wesley) Hsieh](https://wesleyhsieh0806.github.io/)

## Dataset Description
Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects.
Because it builds on TAO, TAO-Amodal also includes modal segmentation masks (visualized as the color overlays above).
The dataset encompasses 880 categories and is aimed at assessing the occlusion reasoning capabilities of current trackers
through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).

### Dataset Download
1. Download with git:
```bash
git lfs install
git clone git@hf.co:datasets/chengyenhsieh/TAO-Amodal
```

Or download with [`python`](https://huggingface.co/docs/huggingface_hub/guides/download#download-files-from-the-hub); a partial-download variant is sketched after step 2:

```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="chengyenhsieh/TAO-Amodal", repo_type="dataset")
```

2. Download all the video frames:

You can either download the frames by following the instructions [here](https://motchallenge.net/tao_download.php) (recommended), or modify our provided [script](./download_frames.sh) and run:
```bash
bash download_frames.sh
```
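
If you only need part of the Hugging Face repository from step 1 (for example, only the annotation JSONs), `snapshot_download` also accepts an `allow_patterns` filter. Below is a minimal sketch; the glob pattern and local directory are illustrative assumptions, not requirements of the dataset:

```python
from huggingface_hub import snapshot_download

# Download only the amodal annotation JSONs from the dataset repository.
# Adjust allow_patterns / local_dir to your needs.
snapshot_download(
    repo_id="chengyenhsieh/TAO-Amodal",
    repo_type="dataset",
    allow_patterns=["amodal_annotations/*.json"],
    local_dir="./TAO-Amodal",
)
```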



## 📚 Dataset Structure

The dataset should be structured like this:
```bash
   TAO-Amodal
    ├── frames
    │    └── train
    │       ├── ArgoVerse
    │       ├── BDD
    │       ├── Charades
    │       ├── HACS
    │       ├── LaSOT
    │       └── YFCC100M
    ├── amodal_annotations
    │    ├── train/validation/test.json
    │    ├── train_lvis_v1.json
    │    └── validation_lvis_v1.json
    ├── example_output
    │    └── prediction.json
    ├── BURST_annotations
    │    ├── train
    │         └── train_visibility.json
    │    ...

```
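
After downloading, a quick sanity check along these lines can confirm the layout above is in place; the local root path is an assumption for this sketch:

```python
from pathlib import Path

# Assumed local root; point this at your own download location.
root = Path("./TAO-Amodal")

expected = [
    "frames/train",
    "amodal_annotations/train.json",
    "amodal_annotations/validation.json",
    "amodal_annotations/test.json",
    "BURST_annotations/train/train_visibility.json",
]

for rel in expected:
    status = "ok" if (root / rel).exists() else "MISSING"
    print(f"{status:8s} {rel}")
```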

## 📚 File Descriptions

| File Name                  | Description                                                                                                                                                                                                                             |
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| train/validation/test.json | Formal annotation files. We use these annotations for visualization. Categories include those in [lvis](https://www.lvisdataset.org/) v0.5 and freeform categories.                                                                     |
| train_lvis_v1.json         | We use this file to train our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander), treating each image frame as an independent sequence. Categories are aligned with those in lvis v1.0.                         |
| validation_lvis_v1.json    | We use this file to evaluate our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander). Categories are aligned with those in lvis v1.0.                                                                            |
| prediction.json            | Example output JSON from the [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander). Tracker predictions should be structured like this file to be evaluated with our [evaluation toolkit](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#bar_chart-evaluation). |
| BURST_annotations/XXX.json | Modal mask annotations from the [BURST dataset](https://github.com/Ali2500/BURST-benchmark) with our heuristic visibility attributes. We provide these files for the convenience of visualization.                                           |

### Annotation and Prediction Format

Our annotations are structured similarly to [TAO](https://github.com/TAO-Dataset/tao/blob/master/tao/toolkit/tao/tao.py#L4), with some modifications.
Annotations:
```
Annotation file format:
{
    "info" : info,
    "images" : [image],
    "videos": [video],
    "tracks": [track],
    "annotations" : [annotation],
    "categories": [category],
    "licenses" : [license],
}

annotation: {
    "id": int,
    "image_id": int,
    "track_id": int,
    "bbox": [x, y, width, height],
    "area": float,

    # Redundant fields for compatibility with COCO scripts
    "category_id": int,
    "video_id": int,

    # Additional attributes for evaluation on TAO-Amodal
    "amodal_bbox": [x, y, width, height],
    "amodal_is_uncertain": bool,
    "visibility": float,  # in [0.0, 1.0]
}

image, info, video, track, category, license: same as in TAO
```
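
As an illustration of how these fields are typically consumed, the following sketch loads `train.json` with the standard `json` module and inspects a few annotations; the local path and the 0.1 visibility threshold are assumptions for this example:

```python
import json
from pathlib import Path

# Assumed local path; adjust to where the annotations were downloaded.
ann_path = Path("./TAO-Amodal/amodal_annotations/train.json")
with ann_path.open() as f:
    data = json.load(f)

print(len(data["images"]), "images,",
      len(data["tracks"]), "tracks,",
      len(data["annotations"]), "annotations")

# Example: count objects that are almost fully occluded or out of frame.
heavily_occluded = [a for a in data["annotations"] if a["visibility"] <= 0.1]
print(len(heavily_occluded), "annotations with visibility <= 0.1")

# Each annotation carries both the modal box ("bbox") and the amodal box
# ("amodal_bbox"), both given as [x, y, width, height].
first = data["annotations"][0]
print(first["bbox"], first["amodal_bbox"], first["amodal_is_uncertain"])
```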

Predictions should be structured as:

```
[{
    "image_id" : int,
    "category_id" : int,
    "bbox" : [x,y,width,height],
    "score" : float,
    "track_id": int,
    "video_id": int
}]
```
Refer to the [TAO dataset](https://github.com/TAO-Dataset/tao/blob/master/docs/evaluation.md) evaluation instructions for further details.
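
As a sketch of how such a file might be produced, assuming a hypothetical `tracker_outputs` list of per-frame results (the variable names and values here are illustrative; see `example_output/prediction.json` in this repository for a complete example):

```python
import json

# Hypothetical per-frame tracker results; replace with your tracker's output.
tracker_outputs = [
    {"image_id": 1, "category_id": 1, "box": [10.0, 20.0, 50.0, 80.0],
     "score": 0.92, "track_id": 3, "video_id": 1},
]

predictions = [
    {
        "image_id": out["image_id"],
        "category_id": out["category_id"],
        "bbox": out["box"],          # [x, y, width, height]
        "score": out["score"],
        "track_id": out["track_id"],
        "video_id": out["video_id"],
    }
    for out in tracker_outputs
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)
```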


## 📺 Example Sequences
Check [here](https://tao-amodal.github.io/#TAO-Amodal) for more examples and [here](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#artist-visualization) for visualization code.
[<img src="https://tao-amodal.github.io/static/images/car_and_bus.png" width="50%">](https://tao-amodal.github.io/dataset.html "tao-amodal")



## Citation 

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```
@misc{hsieh2023tracking,
    title={Tracking Any Object Amodally},
    author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
    year={2023},
    eprint={2312.12433},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

<details>
  <summary>Please also cite the <a href="https://taodataset.org/">TAO</a> and <a href="https://github.com/Ali2500/BURST-benchmark">BURST</a> datasets if you use our dataset.</summary>

  ```
@inproceedings{dave2020tao,
    title={Tao: A large-scale benchmark for tracking any object},
    author={Dave, Achal and Khurana, Tarasha and Tokmakov, Pavel and Schmid, Cordelia and Ramanan, Deva},
    booktitle={Computer Vision--ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part V 16},
    pages={436--454},
    year={2020},
    organization={Springer}
  }

@inproceedings{athar2023burst,
  title={Burst: A benchmark for unifying object recognition, segmentation and tracking in video},
  author={Athar, Ali and Luiten, Jonathon and Voigtlaender, Paul and Khurana, Tarasha and Dave, Achal and Leibe, Bastian and Ramanan, Deva},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1674--1683},
  year={2023}
}
  ```

</details>