---
license: cc-by-4.0
task_categories:
- image-classification
- image-to-video
language:
- en
tags:
- self-supervised learning
- representation learning
pretty_name: Walking_Tours
size_categories:
- n<1K
---

# Walking Tours Dataset
## Overview
The Walking Tours dataset is a unique collection of long-duration egocentric videos captured in urban environments across cities in Europe and Asia. It consists of 10 high-resolution videos, each showing a person walking through a different environment, ranging from city centers to parks to residential areas, under different lighting conditions. A video from a wildlife safari is also included to diversify the dataset with natural environments. The dataset is completely unlabeled and uncurated, making it well suited to self-supervised pretraining.
## Cities Covered
The dataset encompasses walks through the following cities:
- Amsterdam
- Bangkok
- Chiang Mai
- Istanbul
- Kuala Lumpur
- Singapore
- Stockholm
- Venice
- Zurich
## Video Specifications
- Resolution: 4K (3840 × 2160 pixels)
- Frame Rate: 60 frames per second
- License: Creative Commons (CC-BY)
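For a local copy of any video, the resolution and frame rate can be verified with OpenCV. The snippet below is only a minimal sketch; the file path is a placeholder and not part of the dataset itself.

```python
import cv2  # pip install opencv-python

# Hypothetical path to one downloaded Walking Tours video.
path = "videos/Venice.mp4"

cap = cv2.VideoCapture(path)
if not cap.isOpened():
    raise IOError(f"Could not open {path}")

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # expected 3840
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # expected 2160
fps = cap.get(cv2.CAP_PROP_FPS)                   # expected ~60
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

print(f"{width}x{height} @ {fps:.2f} fps, {frames} frames "
      f"(~{frames / fps / 60:.1f} minutes)")
cap.release()
```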
## Duration
The videos vary in duration, offering a diverse range of content:
- Minimum Duration: 59 minutes (Wildlife safari)
- Maximum Duration: 2 hours 55 minutes (Bangkok)
- Average Duration: 1 hour 38 minutes
## Download the Dataset
The complete list of WTour videos is available in WTour.txt, which contains the YouTube link and the corresponding city for each video.
To download the dataset, we first install pytube:

```bash
pip install pytube
```
Then, we run:

```bash
python download_WTours.py --output_folder <path_to_folder>
```
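For reference, the core of such a download script might look like the sketch below. This is not the actual `download_WTours.py`; it only assumes that WTour.txt lists one YouTube URL per line (optionally followed by the city name) and uses pytube's standard API.

```python
import argparse
import os
from pytube import YouTube  # pip install pytube


def main():
    parser = argparse.ArgumentParser(description="Illustrative WTour download sketch.")
    parser.add_argument("--output_folder", required=True, help="Where to store the videos")
    parser.add_argument("--video_list", default="WTour.txt", help="File with one YouTube URL per line")
    args = parser.parse_args()

    os.makedirs(args.output_folder, exist_ok=True)
    with open(args.video_list) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            url = parts[0]  # assumes the URL is the first token on each line
            yt = YouTube(url)
            # Highest-resolution progressive stream; true 4K streams are adaptive
            # and would require separate audio/video download and muxing.
            stream = yt.streams.get_highest_resolution()
            print(f"Downloading {yt.title} ...")
            stream.download(output_path=args.output_folder)


if __name__ == "__main__":
    main()
```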
In order to comply with GDPR, we also try to blur out all faces and license plates appearing in the videos using Deface.

To do this for all videos in the WTour dataset, first install Deface:

```bash
python3 -m pip install deface
```
Then run Deface on all videos using the bash script:

```bash
chmod a+x gdpr_blur_faces.sh
./gdpr_blur_faces.sh
```
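The script itself is not reproduced here, but the same batch anonymization can be driven from Python. The sketch below is a rough equivalent under the assumption that the downloaded videos are .mp4 files in a single folder and that Deface's command-line interface is on the PATH; folder names are placeholders.

```python
import glob
import os
import subprocess

# Hypothetical input/output folders for the downloaded WTour videos.
video_folder = "videos"
output_folder = "videos_blurred"
os.makedirs(output_folder, exist_ok=True)

for video in sorted(glob.glob(os.path.join(video_folder, "*.mp4"))):
    output = os.path.join(output_folder, os.path.basename(video))
    # Invokes the deface CLI, which blurs detected faces and writes
    # the anonymized copy to the given output path.
    subprocess.run(["deface", video, "-o", output], check=True)
```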
## Citation
If you find this work useful and use it in your own research, please cite our paper:
```bibtex
@inproceedings{venkataramanan2023imagenet,
  title={Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video},
  author={Venkataramanan, Shashanka and Rizve, Mamshad Nayeem and Carreira, Jo{\~a}o and Asano, Yuki M and Avrithis, Yannis},
  booktitle={International Conference on Learning Representations},
  year={2024}
}
```