
Dataset Card for RAVEL

A large-scale entity-attribute dataset covering factual, linguistic, and commonsense knowledge.

To load the dataset:

from datasets import load_dataset

dataset = load_dataset("hij/ravel")
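
Once loaded, the splits and their features can be inspected with the standard datasets API. This is a minimal usage sketch continuing from the load_dataset call above; the split names follow the structure shown under Dataset Structure below:

print(dataset)              # lists the available splits and their features
print(dataset["train"][0])  # first entity with its attribute values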

Dataset Details

Dataset Description

The RAVEL dataset contains five types of entities, each with at least 500 instances, at least 4 attributes, and at least 50 prompt templates, as shown in the table below.

| Entity Type | Attributes | #Entities | #Prompt Templates |
|---|---|---|---|
| City | Country, Language, Latitude, Longitude, Timezone, Continent | 3552 | 150 |
| Nobel Laureate | Award Year, Birth Year, Country of Birth, Field, Gender | 928 | 100 |
| Verb | Definition, Past Tense, Pronunciation, Singular | 986 | 60 |
| Physical Object | Biological Category, Color, Size, Texture | 563 | 60 |
| Occupation | Duty, Gender Bias, Industry, Work Location | 799 | 50 |

Compared with existing entity-attribute/relation datasets, such as CounterFact, RAVEL offers two unique features:

  • Multiple attributes per entity to evaluate how well interpretability methods isolate individual concepts
  • 10x more entities per entity type to evaluate how well interpretability methods generalize

Dataset Sources

Paper: RAVEL: Evaluating Interpretability Methods on Disentangling Language Model Representations (ACL 2024), https://aclanthology.org/2024.acl-long.470

Uses

The dataset is primarily designed for interpretability research.

Dataset Structure

Each entity type is split into train, validation, and test sets. Each example is structured as a dictionary containing the entity and its attributes.

For example, the entity type city is structured as follows:

DatasetDict({
    train: Dataset({
        features: ['ID', 'City', 'Continent', 'Country', 'Language', 'Latitude', 'Longitude', 'Timezone', 'URL'],
        num_rows: 2041
    })
    validation: Dataset({
        features: ['ID', 'City', 'Continent', 'Country', 'Language', 'Latitude', 'Longitude', 'Timezone', 'URL'],
        num_rows: 970
    })
    test: Dataset({
        features: ['ID', 'City', 'Continent', 'Country', 'Language', 'Latitude', 'Longitude', 'Timezone', 'URL'],
        num_rows: 1126
    })
})
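
A minimal sketch of working with these splits, assuming the city entity type shown above has been loaded; only standard datasets methods (column_names, filter) are used:

train = dataset["train"]
print(train.column_names)  # ['ID', 'City', 'Continent', 'Country', ...]

# For example, select all entities whose Country attribute is "United States".
us_cities = train.filter(lambda ex: ex["Country"] == "United States")
print(len(us_cities), us_cities[0]["City"])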

Each example, i.e., an entity, is structured as follows:

{
  "ID": "2498-0",
  "City": "Seattle",
  "Continent": "North America",
  "Country": "United States",
  "Language": "English",
  "Latitude": "48",
  "Longitude": "-122",
  "Timezone": "America/Los_Angeles",
  "URL": "https://en.wikipedia.org/wiki/Seattle"
}
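
In interpretability experiments, each entity record is typically paired with a prompt template that queries one attribute. The template below is a hypothetical illustration only; the actual RAVEL prompt templates ship with the dataset and may be phrased differently:

# Hypothetical template querying the "Country" attribute of a city entity.
template = "{city} is a city in the country of"

example = dataset["train"][0]
prompt = template.format(city=example["City"])
print(prompt)              # e.g., "Seattle is a city in the country of"
print(example["Country"])  # gold attribute value, e.g., "United States"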

Citation

BibTeX:

@inproceedings{huang-etal-2024-ravel,
    title = "{RAVEL}: Evaluating Interpretability Methods on Disentangling Language Model Representations",
    author = "Huang, Jing and Wu, Zhengxuan and Potts, Christopher and Geva, Mor and Geiger, Atticus",
    editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.470",
    pages = "8669--8687",
}