---
license: mit
---

# SpatialEvalLLM Dataset

## Overview

The SpatialEvalLLM dataset is a collection of prompts containing natural language descriptions of different geometries, including square grids (with a rhombus-grid variation), rectangular grids, hexagonal grids, triangular grids, tree structures, and ring structures of various sizes. The dataset is designed to evaluate how well language models navigate these spatial structures. Each prompt specifies a starting point within the structure, directions for movement, and the number of steps to take. The dataset facilitates benchmarking and assessing the ability of language models to understand and navigate spatial configurations.

## Contents

The dataset contains the following components:

### Folders

- `map_global`: prompts where each prompt starts with a description of the entire structure.
- `map_local`: prompts where only partial information about the structure is given.

### File Naming Convention

All files follow the format `type-[type]_size-[size]_steps-[steps]_seed-[seed]_n-[n]`, where:

- `type`: type of structure described in the prompts of the file.
- `size`: size of the structure.
- `steps`: number of navigation steps.
- `seed`: random seed number.
- `n`: number of prompts in the file.
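If you want to work with the metadata encoded in the file names programmatically, a minimal parsing sketch is shown below. The helper name and the assumption that `steps`, `seed`, and `n` are integers (while `size` is kept as a string, since its format may vary across structure types) are ours, not part of the dataset.

```python
import re

# Hypothetical helper (not part of the dataset) that parses the documented
# file-name convention: type-[type]_size-[size]_steps-[steps]_seed-[seed]_n-[n]
FILENAME_PATTERN = re.compile(
    r"type-(?P<type>[^_]+)_size-(?P<size>[^_]+)"
    r"_steps-(?P<steps>\d+)_seed-(?P<seed>\d+)_n-(?P<n>\d+)"
)

def parse_filename(name: str) -> dict:
    """Return the metadata encoded in a SpatialEvalLLM file name."""
    match = FILENAME_PATTERN.search(name)
    if match is None:
        raise ValueError(f"Unrecognized file name: {name}")
    fields = match.groupdict()
    # steps, seed, and n are numeric; type and size are kept as strings.
    for key in ("steps", "seed", "n"):
        fields[key] = int(fields[key])
    return fields

print(parse_filename("type-ring_size-12_steps-8_seed-12_n-100"))
# {'type': 'ring', 'size': '12', 'steps': 8, 'seed': 12, 'n': 100}
```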

### Prompt Structure

Every prompt has two keys, `"question"` and `"answer"`:

- `"question"`: the prompt itself.
- `"answer"`: the ground truth for the prompt.

For example, the file `type-ring_size-12_steps-8_seed-12_n-100` contains 100 prompts describing a ring structure with 12 nodes, each asking the model to perform 8 navigation steps, generated using random seed 12.
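As a rough illustration of reading a prompt file, the sketch below assumes each file is either a single JSON document or JSON Lines with the `"question"` and `"answer"` keys described above; the path is illustrative, and you should adjust the parsing to match the actual serialization in your copy of the data.

```python
import json
from pathlib import Path

# Illustrative path; check the folder layout and file format against your copy.
path = Path("map_global/type-ring_size-12_steps-8_seed-12_n-100")

text = path.read_text()
try:
    # Case 1: the whole file is one JSON document (e.g. a list of prompts).
    prompts = json.loads(text)
except json.JSONDecodeError:
    # Case 2: JSON Lines, one {"question": ..., "answer": ...} object per line.
    prompts = [json.loads(line) for line in text.splitlines() if line.strip()]

print(prompts[0]["question"])  # the prompt shown to the model
print(prompts[0]["answer"])    # the ground-truth answer
```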

## Usage

Researchers and developers working in natural language processing (NLP), machine learning, and artificial intelligence (AI) can utilize the SpatialEvalLLM dataset for:

- Training and evaluating large language models (LLMs) on spatial reasoning and navigation tasks.
- Benchmarking the performance of different NLP models in understanding and following spatial instructions.
- Investigating the capabilities and limitations of LLMs in navigating diverse spatial configurations.

If you wish to reproduce the dataset or generate more prompts with different sizes and navigation steps, the code used for data generation is available, with instructions, at https://github.com/runopti/SpatialEvalLLM. You can use it to generate custom prompts according to your specific requirements.

## Citation

If you use the SpatialEvalLLM dataset in your work, please cite the following paper:

```bibtex
@article{yamada2023evaluating,
  title={Evaluating Spatial Understanding of Large Language Models},
  author={Yamada, Yutaro and Bao, Yihan and Lampinen, Andrew K and Kasai, Jungo and Yildirim, Ilker},
  journal={Transactions on Machine Learning Research},
  year={2024}
}
```

## Contact

For any inquiries or issues regarding the dataset, please contact yutaro.yamada@yale.edu or yihan.bao@yale.edu.