---
license: cc-by-4.0
task_categories:
- image-classification
language:
- en
tags:
- vlol
- v-lol
- visual logical learning
- reasoning
- visual reasoning
- logical reasoning
- ILP
- Symbolic AI
- logic
- Inductive logic programming
pretty_name: 'V-LoL: A Diagnostic Dataset for Visual Logical Learning'
size_categories:
- 10K<n<100K
---

# V-LoL: A Diagnostic Dataset for Visual Logical Learning

## Dataset Structure

### Data Instances

An example instance looks as follows:

```python
{
  'image': <PIL.Image.Image>,
  'label': 1
}
```

### Data Fields

The data instances have the following fields:

- `image`: a `PIL.Image.Image` object containing the image. Note that when accessing the image column, e.g. `dataset[0]["image"]`, the image file is decoded automatically. Decoding a large number of image files can take a significant amount of time. It is therefore important to query the sample index before the `"image"` column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- `label`: an `int` classification label.

Class label mapping:

| ID | Class |
| --- | ----------- |
| 0 | Westbound |
| 1 | Eastbound |

### Data Splits

See tasks.

## Dataset Creation

### Curation Rationale

Despite the successes of recent developments in visual AI, various shortcomings remain: from a lack of exact logical reasoning, to limited abstract generalization abilities, to difficulties understanding complex and noisy scenes. Unfortunately, existing benchmarks were not designed to capture more than a few of these aspects. Whereas deep learning datasets focus on visually complex data but simple visual reasoning tasks, inductive logic datasets involve complex logical learning tasks but lack the visual component. To address this, we propose the visual logical learning dataset, V-LoL, which seamlessly combines visual and logical challenges. Notably, we introduce the first instantiation of V-LoL, V-LoL-Train -- a visual rendition of a classic benchmark in symbolic AI, the Michalski train problem.
By incorporating intricate visual scenes and flexible logical reasoning tasks within a versatile framework, V-LoL-Train provides a platform for investigating a wide range of visual logical learning challenges. To create new V-LoL challenges, we provide a comprehensive guide and resources in our [GitHub repository](https://github.com/ml-research/vlol-dataset-gen). The repository offers a collection of tools and code that enables researchers and practitioners to easily generate new V-LoL challenges based on their specific requirements, along with the necessary documentation, code samples, and instructions to create and customize them.

### Source Data

#### Initial Data Collection and Normalization

The individual datasets are generated using the V-LoL-Train generator. See the [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

#### Who are the source language producers?

See the [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

### Annotations

#### Annotation process

The images are generated in two steps: first, a valid symbolic representation of a train is sampled; then, it is visualized within a 3D scene.

#### Who are the annotators?

Annotations are derived automatically using a Python, Prolog, and Blender pipeline. See the [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

### Personal and Sensitive Information

The dataset contains neither personal nor sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset has no social impact.

### Discussion of Biases

Please refer to our paper.

### Other Known Limitations

Please refer to our paper.
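As noted in the Data Fields section, indexing the sample before the `"image"` column avoids decoding the whole image column at once. The toy class below is a minimal sketch of why that matters; `ToyImageDataset` and its internals are purely illustrative stand-ins, not part of the actual `datasets` API.

```python
class ToyImageDataset:
    """Toy stand-in for an image dataset: row access decodes one file,
    column access decodes every file in the dataset."""

    def __init__(self, paths, labels):
        self._paths = paths
        self._labels = labels
        self.decode_count = 0  # tracks how many image files were decoded

    def _decode(self, path):
        self.decode_count += 1
        return f"<decoded {path}>"  # placeholder for a PIL.Image.Image

    def __getitem__(self, key):
        if isinstance(key, int):  # row access: decode a single image
            return {"image": self._decode(self._paths[key]),
                    "label": self._labels[key]}
        if key == "image":        # column access: decode ALL images
            return [self._decode(p) for p in self._paths]
        if key == "label":
            return list(self._labels)
        raise KeyError(key)


ds = ToyImageDataset([f"img_{i}.png" for i in range(1000)],
                     [i % 2 for i in range(1000)])

_ = ds[0]["image"]      # preferred: decodes exactly 1 file
print(ds.decode_count)  # 1
_ = ds["image"][0]      # discouraged: decodes all 1000 files first
print(ds.decode_count)  # 1001
```

The same access-order principle applies when iterating: query one sample at a time rather than materializing the image column.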
## Additional Information

### Dataset Curators

Lukas Helff

### Licensing Information

MIT License

### Citation Information

```bibtex
@misc{helff2023vlol,
  title={V-LoL: A Diagnostic Dataset for Visual Logical Learning},
  author={Lukas Helff and Wolfgang Stammer and Hikaru Shindo and Devendra Singh Dhami and Kristian Kersting},
  journal={Dataset available from https://sites.google.com/view/v-lol},
  year={2023},
  eprint={2306.07743},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```

### Contributions

Lukas Helff, Wolfgang Stammer, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting