# Temporal Logic Video (TLV) Dataset

<!-- PROJECT LOGO -->
<br />
<div align="center">
  <a href="https://github.com/UTAustin-SwarmLab/temporal-logic-video-dataset">
    <img src="images/logo.png" alt="Logo" width="240" height="240">
  </a>

  <h3 align="center">Temporal Logic Video (TLV) Dataset</h3>

  <p align="center">
    Synthetic and real video datasets with temporal logic annotations
    <br />
    <a href="https://github.com/UTAustin-SwarmLab/temporal-logic-video-dataset"><strong>Explore the docs »</strong></a>
    <br />
    <br />
    <a href="https://anoymousu1.github.io/nsvs-anonymous.github.io/">NSVS-TL Project Webpage</a>
    ·
    <a href="https://github.com/UTAustin-SwarmLab/Neuro-Symbolic-Video-Search-Temploral-Logic">NSVS-TL Source Code</a>
  </p>
</div>

## Overview

The Temporal Logic Video (TLV) Dataset addresses the scarcity of state-of-the-art video datasets for long-horizon, temporally extended activity and object detection. It comprises two main components:

1. Synthetic datasets: Generated by concatenating static images from established computer vision datasets (COCO and ImageNet), allowing for the introduction of a wide range of Temporal Logic (TL) specifications.
2. Real-world datasets: Based on open-source autonomous vehicle (AV) driving datasets, specifically NuScenes and Waymo.

## Table of Contents

- [Dataset Composition](#dataset-composition)
- [Dataset (Release)](#dataset)
- [Installation](#installation)
- [Usage](#usage)
- [Data Generation](#data-generation)
- [License](#license)
- [Citation](#citation)

## Dataset Composition

### Synthetic Datasets
- Sources: COCO and ImageNet
- Purpose: Introduce artificial Temporal Logic specifications
- Generation Method: Image stitching from static datasets

### Real-world Datasets
- Sources: NuScenes and Waymo
- Purpose: Provide real-world autonomous vehicle scenarios
- Annotation: Temporal Logic specifications added to existing data

## Dataset
|
55 |
+
<div align="center">
|
56 |
+
<a href="https://github.com/UTAustin-SwarmLab/temporal-logic-video-dataset">
|
57 |
+
<img src="images/teaser.png" alt="Logo" width="840" height="440">
|
58 |
+
</a>
|
59 |
+
</div>
|
60 |
+
|
61 |
+
Although we provide source code to generate datasets from different types of data sources, we release dataset v1 as a proof of concept.

### Dataset Structure

The data is offered as serialized objects, each containing a set of frames with annotations.

#### File Naming Convention

`<tlv_data_type>:source:<datasource>-number_of_frames:<number_of_frames>-<uuid>.pkl`
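
For illustration, a filename following this convention can be split into its fields with a short regex. This is a hypothetical helper, not part of the released code, and the example filename is made up:

```python
import re

# Pattern mirroring the naming convention above; the field names are
# taken from the convention itself.
FILENAME_RE = re.compile(
    r"(?P<tlv_data_type>[^:]+):source:(?P<datasource>.+?)"
    r"-number_of_frames:(?P<number_of_frames>\d+)-(?P<uuid>[0-9a-f-]+)\.pkl$"
)

def parse_tlv_filename(name: str) -> dict:
    """Return the convention's fields as a dict, or raise ValueError."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a TLV filename: {name!r}")
    fields = m.groupdict()
    fields["number_of_frames"] = int(fields["number_of_frames"])
    return fields

# Example with a made-up filename:
info = parse_tlv_filename(
    "tlv_synthetic_dataset:source:coco-number_of_frames:25-"
    "123e4567-e89b-12d3-a456-426614174000.pkl"
)
print(info["datasource"], info["number_of_frames"])
```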

#### Object Attributes
Each serialized object contains the following attributes:
- `ground_truth`: Boolean indicating whether the dataset contains ground-truth labels
- `ltl_formula`: Temporal logic formula applied to the dataset
- `proposition`: The set of propositions used in `ltl_formula`
- `number_of_frame`: Total number of frames in the dataset
- `frames_of_interest`: Frames that satisfy `ltl_formula`
- `labels_of_frames`: Labels for each frame
- `images_of_frames`: Image data for each frame
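
Assuming the objects are standard Python pickles (which the `.pkl` extension suggests, though this is an assumption), reading the attributes back might look like the sketch below. It builds a stand-in object rather than downloading real data; real TLV files may store a different class, but attribute access would look the same:

```python
import pickle
from types import SimpleNamespace

# Illustrative stand-in object with the attributes listed above.
sample = SimpleNamespace(
    ground_truth=True,
    ltl_formula="prop1 U prop2",
    proposition={"prop1", "prop2"},
    number_of_frame=3,
    frames_of_interest=[2],
    labels_of_frames=[["prop1"], ["prop1"], ["prop2"]],
    images_of_frames=[None, None, None],  # image arrays in the real data
)

# Serialize and read back, as one would with a downloaded .pkl file.
with open("sample_tlv.pkl", "wb") as f:
    pickle.dump(sample, f)

with open("sample_tlv.pkl", "rb") as f:
    data = pickle.load(f)

print(data.ltl_formula, data.number_of_frame)
```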

You can download a dataset from here. The structure of the dataset is as follows:

```
tlv-dataset-v1/
├── tlv_real_dataset/
│   ├── prop1Uprop2/
│   └── (prop1&prop2)Uprop3/
└── tlv_synthetic_dataset/
    ├── Fprop1/
    ├── Gprop1/
    ├── prop1&prop2/
    ├── prop1Uprop2/
    └── (prop1&prop2)Uprop3/
```

#### Dataset Statistics

1. Total Number of Frames

| Ground Truth TL Specifications | COCO (Synthetic) | ImageNet (Synthetic) | Waymo (Real) | NuScenes (Real) |
| --- | ---: | ---: | ---: | ---: |
| Eventually Event A | - | 15,750 | - | - |
| Always Event A | - | 15,750 | - | - |
| Event A And Event B | 31,500 | - | - | - |
| Event A Until Event B | 15,750 | 15,750 | 8,736 | 19,808 |
| (Event A And Event B) Until Event C | 5,789 | - | 7,459 | 7,459 |

2. Total Number of Datasets

| Ground Truth TL Specifications | COCO (Synthetic) | ImageNet (Synthetic) | Waymo (Real) | NuScenes (Real) |
| --- | ---: | ---: | ---: | ---: |
| Eventually Event A | - | 60 | - | - |
| Always Event A | - | 60 | - | - |
| Event A And Event B | 120 | - | - | - |
| Event A Until Event B | 60 | 60 | 45 | 494 |
| (Event A And Event B) Until Event C | 97 | - | 30 | 186 |

## Installation

```bash
python -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip build
python -m pip install --editable ".[dev,test]"
```

### Prerequisites

1. ImageNet (ILSVRC 2017):
```
ILSVRC/
├── Annotations/
├── Data/
├── ImageSets/
└── LOC_synset_mapping.txt
```

2. COCO (2017):
```
COCO/
└── 2017/
    ├── annotations/
    ├── train2017/
    └── val2017/
```

## Usage

The following options configure data loading and processing.

### Data Loader Configuration

- `data_root_dir`: Root directory of the dataset
- `mapping_to`: Label mapping scheme (default: "coco")
- `save_dir`: Output directory for processed data

### Synthetic Data Generator Configuration

- `initial_number_of_frame`: Starting frame count per video
- `max_number_frame`: Maximum frame count per video
- `number_video_per_set_of_frame`: Number of videos to generate per frame count
- `increase_rate`: Frame count increment rate
- `ltl_logic`: Temporal Logic specification (e.g., "F prop1", "G prop1")
- `save_images`: Boolean flag for saving individual frames
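
To make the interplay of these parameters concrete, here is a sketch of how the frame-count schedule could be derived. It assumes `increase_rate` is an additive increment applied until `max_number_frame` is reached; the actual generator may compute this differently:

```python
def frame_count_schedule(initial_number_of_frame: int,
                         max_number_frame: int,
                         increase_rate: int) -> list[int]:
    """Frame counts the generator would iterate over, assuming the
    count grows additively by increase_rate up to the maximum."""
    counts = []
    n = initial_number_of_frame
    while n <= max_number_frame:
        counts.append(n)
        n += increase_rate
    return counts

# e.g. starting at 5 frames, stepping by 5, capped at 25:
print(frame_count_schedule(5, 25, 5))  # [5, 10, 15, 20, 25]
```

For each entry in the schedule, `number_video_per_set_of_frame` videos would be generated at that length.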

## Data Generation

### COCO Synthetic Data Generation

```bash
python3 run_scripts/run_synthetic_tlv_coco.py --data_root_dir "../COCO/2017" --save_dir "<output_dir>"
```

### ImageNet Synthetic Data Generation

```bash
python3 run_synthetic_tlv_imagenet.py --data_root_dir "../ILSVRC" --save_dir "<output_dir>"
```

Note: The ImageNet generator does not support LTL formulae containing the '&' operator.
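
The TL specifications used throughout (F, G, U, &) have standard finite-trace semantics over per-frame labels. The sketch below is purely illustrative and is not the evaluator used by this repo:

```python
def eventually(trace):  # F p: p holds in at least one frame
    return any(trace)

def always(trace):  # G p: p holds in every frame
    return all(trace)

def until(trace_a, trace_b):  # a U b: b eventually holds, and a holds
    # in every frame before that point
    for i, b in enumerate(trace_b):
        if b and all(trace_a[:i]):
            return True
    return False

# Per-frame truth values of two propositions over a 5-frame clip:
prop1 = [True, True, True, False, False]
prop2 = [False, False, True, True, False]
print(eventually(prop2), always(prop1), until(prop1, prop2))
# prints: True False True
```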

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

## Citation

If you find this repo useful, please cite our paper:

```bibtex
@inproceedings{Choi_2024_ECCV,
    author    = {Choi, Minkyu and Goel, Harsh and Omama, Mohammad and Yang, Yunhao and Shah, Sahil and Chinchali, Sandeep},
    title     = {Towards Neuro-Symbolic Video Understanding},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    month     = {September},
    year      = {2024}
}
```