---
language: en
license: mit
size_categories:
  - 1K<n<10K
task_categories:
  - image-to-text
  - visual-question-answering
tags:
  - spatial
  - dialogue
  - visual-grounding
dataset_info:
  features:
    - name: instance_id
      dtype: int32
    - name: scene_key
      dtype: string
    - name: listener_view_image
      dtype: image
    - name: speaker_view_image
      dtype: image
    - name: human_speaker_message
      dtype: string
    - name: speaker_elapsed_time
      dtype: float32
    - name: positions
      dtype: string
    - name: listener_target_bbox
      dtype: string
    - name: listener_distractor_0_bbox
      dtype: string
    - name: listener_distractor_1_bbox
      dtype: string
    - name: speaker_target_bbox
      dtype: string
    - name: speaker_distractor_0_bbox
      dtype: string
    - name: speaker_distractor_1_bbox
      dtype: string
    - name: human_listener_message
      dtype: string
    - name: listener_elapsed_time
      dtype: float32
    - name: type
      dtype: string
  splits:
    - name: validation
      num_bytes: 6561385869
      num_examples: 2970
  download_size: 6419735381
  dataset_size: 6561385869
configs:
  - config_name: default
    data_files:
      - split: validation
        path: data/validation-*
---

# Dataset Card for Multi-Agent Referential Communication Dataset

*Example scene showing the speaker (left) and listener (right) views.*

## Dataset Details

### Dataset Description

This dataset contains spatial dialogue data for multi-agent referential communication tasks in 3D environments. It includes pairs of images showing speaker and listener views within photorealistic indoor scenes, along with natural language descriptions of target object locations.

The key feature of this dataset is that it captures communication between two agents with different physical perspectives in a shared 3D space. Each agent has their own unique viewpoint of the scene, requiring them to consider each other's perspectives when generating and interpreting spatial references.

### Dataset Summary

- Size: 2,970 dialogue instances across 1,485 scenes
- Total Scenes Generated: 27,504 scenes (24,644 train, 1,485 validation, 1,375 test)
- Task Type: Referential communication between embodied agents
- Language(s): English
- License: MIT
- Curated by: University of California, Berkeley
- Time per Task: Median 33.0s for speakers, 10.5s for listeners

## Dataset Structure

Each instance contains the following (see the loading example after this list):

- Speaker view image (1024x1024 resolution)
- Listener view image (1024x1024 resolution)
- Natural language referring expression from the human speaker
- Target object location
- Listener object selection
- Scene metadata, including:
  - Agent positions and orientations
  - Referent placement method (random vs. adversarial)
  - Base environment identifier
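
The fields above can be read directly with the `datasets` library. Below is a minimal loading sketch; the repository id (`ZinengTang/PersReFex`) and the assumption that `positions` and the `*_bbox` fields are JSON-encoded strings are inferences from this card, not guarantees.

```python
# Minimal loading sketch; repo id and JSON encoding of string fields are assumptions.
import json

from datasets import load_dataset

ds = load_dataset("ZinengTang/PersReFex", split="validation")  # repo id assumed

example = ds[0]
print(example["scene_key"], example["type"])
print(example["human_speaker_message"])

# The `image` features decode to PIL.Image objects.
speaker_img = example["speaker_view_image"]
listener_img = example["listener_view_image"]
print(speaker_img.size, listener_img.size)  # expected: (1024, 1024) each

# Box and position fields are stored as strings; JSON decoding is an assumption.
target_bbox = json.loads(example["listener_target_bbox"])
positions = json.loads(example["positions"])
print(target_bbox)
print(positions)
```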

## Dataset Creation

1. Base environments: 450 high-quality 3D indoor environments from ScanNet++
2. Scene generation process:
   - Place two agents with controlled relative orientations (0° to 180°)
   - Place 3 referent objects using either random or adversarial placement
   - Render images from each agent's perspective (see the bounding-box overlay sketch after this list)
   - Apply quality filtering using GPT-4V
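
Because both the speaker-view and listener-view bounding boxes of the target and distractors are recorded per instance, a quick way to sanity-check a rendered scene is to overlay a box on the corresponding view. A hedged sketch, assuming the bbox string decodes to `[x_min, y_min, x_max, y_max]` pixel coordinates:

```python
# Overlay sketch; the bbox format [x_min, y_min, x_max, y_max] is an assumption.
import json

from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("ZinengTang/PersReFex", split="validation")  # repo id assumed
example = ds[0]

img = example["listener_view_image"].copy()
bbox = json.loads(example["listener_target_bbox"])  # assumed pixel coordinates

draw = ImageDraw.Draw(img)
draw.rectangle(bbox, outline="red", width=4)
img.save("listener_target_overlay.png")
```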

## Citation

BibTeX:

```bibtex
@article{tang2024grounding,
  title={Grounding Language in Multi-Perspective Referential Communication},
  author={Tang, Zineng and Mao, Lingjun and Suhr, Alane},
  journal={EMNLP},
  year={2024}
}
```

## Dataset Card Contact

Contact the authors at terran@berkeley.edu.