---
configs:
  - config_name: city_entity
    data_files:
      - split: train
        path: city_entity/train*.parquet
      - split: val
        path: city_entity/val*.parquet
      - split: test
        path: city_entity/test*.parquet
  - config_name: city_prompt
    data_files:
      - split: train
        path: city_prompt/train*.parquet
      - split: val
        path: city_prompt/val*.parquet
      - split: test
        path: city_prompt/test*.parquet
license: mit
task_categories:
  - text-generation
  - question-answering
language:
  - en
---

# Dataset Card for RAVEL

A large-scale entity-attribute dataset covering factual, linguistic, and commonsense knowledge.

To load the dataset, pick one of the configs defined in the metadata above (e.g., `city_entity` or `city_prompt`):

```python
from datasets import load_dataset

dataset = load_dataset("hij/ravel", "city_entity")
```
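Entities and prompt templates live in separate configs, so loading both for a given entity type takes two calls. A minimal sketch using the two configs listed in the metadata:

```python
from datasets import load_dataset

# Entities and their attribute annotations.
city_entities = load_dataset("hij/ravel", "city_entity")
# Prompt templates used to query those attributes.
city_prompts = load_dataset("hij/ravel", "city_prompt")

for split in ("train", "val", "test"):
    print(split, len(city_entities[split]), len(city_prompts[split]))
```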

## Dataset Details

### Dataset Description

The RAVEL dataset contains five types of entities, each with at least 500 instances, at least 4 attributes, and at least 50 prompt templates, as shown in the table below.

| Entity Type | Attributes | #Entities | #Prompt Templates |
|---|---|---|---|
| City | Country, Language, Latitude, Longitude, Timezone, Continent | 3552 | 150 |
| Nobel Laureate | Award Year, Birth Year, Country of Birth, Field, Gender | 928 | 100 |
| Verb | Definition, Past Tense, Pronunciation, Singular | 986 | 60 |
| Physical Object | Biological Category, Color, Size, Texture | 563 | 60 |
| Occupation | Duty, Gender Bias, Industry, Work Location | 799 | 50 |

Compared with existing entity-attribute/relation datasets, such as CounterFact, RAVEL offers two unique features:

- Multiple attributes per entity, to evaluate how well interpretability methods isolate individual concepts
- 10x more entities per entity type, to evaluate how well interpretability methods generalize

### Dataset Sources

- **Paper:** [RAVEL: Evaluating Interpretability Methods on Disentangling Language Model Representations](https://aclanthology.org/2024.acl-long.470)

## Uses

The dataset is primarily designed for interpretability research.

## Dataset Structure

Each entity type is associated with two subsets: entities and prompt templates. Both subsets are split into `train`, `val`, and `test`.

### Entity

For the entity subset, each example is structured as a dictionary containing the entity and its attributes. An additional `ID` field is used to disambiguate entities.

For example, the entity type city is structured as follows:

```python
DatasetDict({
    train: Dataset({
        features: ['ID', 'City', 'Continent', 'Country', 'Language', 'Latitude', 'Longitude', 'Timezone', 'URL'],
        num_rows: 2041
    })
    val: Dataset({
        features: ['ID', 'City', 'Continent', 'Country', 'Language', 'Latitude', 'Longitude', 'Timezone', 'URL'],
        num_rows: 970
    })
    test: Dataset({
        features: ['ID', 'City', 'Continent', 'Country', 'Language', 'Latitude', 'Longitude', 'Timezone', 'URL'],
        num_rows: 1126
    })
})
```

Each example, i.e., an entity, is structured as follows:

```json
{
  "ID": "2498-0",
  "City": "Seattle",
  "Continent": "North America",
  "Country": "United States",
  "Language": "English",
  "Latitude": "48",
  "Longitude": "-122",
  "Timezone": "America/Los_Angeles",
  "URL": "https://en.wikipedia.org/wiki/Seattle"
}
```
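As a quick example of working with these records (a minimal sketch; the config and field names follow this card), entities can be filtered by attribute value with the standard `datasets` API:

```python
from datasets import load_dataset

cities = load_dataset("hij/ravel", "city_entity")

# Select all cities whose Country attribute is "United States".
us_cities = cities["train"].filter(lambda row: row["Country"] == "United States")
print(us_cities[0]["City"], us_cities[0]["Timezone"])
```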

### Prompt

The prompt subset contains the prompt templates, along with the attribute each template queries (`Attribute`), whether the template comes from RAVEL or Wikipedia (`Source`), and which entities the template can be used with (`Entity`).

- An empty string in the `Attribute` field means the prompt does not query a specific attribute.
- An empty string in the `Entity` field means the prompt can be used with all entities of the given type.

For example, the prompt templates for city are structured as follows:

```python
DatasetDict({
    train: Dataset({
        features: ['Template', 'Attribute', 'Source', 'Entity'],
        num_rows: 442
    })
    val: Dataset({
        features: ['Template', 'Attribute', 'Source', 'Entity'],
        num_rows: 397
    })
    test: Dataset({
        features: ['Template', 'Attribute', 'Source', 'Entity'],
        num_rows: 372
    })
})
```

Each example, i.e., a prompt template, is structured as follows:

```python
{
  'Template': '%s is a city in the country of',
  'Attribute': 'Country',
  'Source': 'RAVEL',
  'Entity': ''
}
```
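To see how the two subsets fit together, here is a minimal sketch that fills a template's `%s` placeholder with an entity name, respecting the empty-string conventions above (the config and field names follow this card; the pairing logic is illustrative, not an official API):

```python
from datasets import load_dataset

entities = load_dataset("hij/ravel", "city_entity")["train"]
templates = load_dataset("hij/ravel", "city_prompt")["train"]

# Keep templates that query Country and are not entity-specific
# (an empty Entity field means the template applies to all cities).
country_templates = [
    t for t in templates
    if t["Attribute"] == "Country" and t["Entity"] == ""
]

# Fill the %s placeholder to build (prompt, label) pairs.
entity = entities[0]
for t in country_templates[:3]:
    prompt = t["Template"] % entity["City"]
    print(prompt, "->", entity["Country"])
```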

## Citation

**BibTeX:**

```bibtex
@inproceedings{huang-etal-2024-ravel,
    title = "{RAVEL}: Evaluating Interpretability Methods on Disentangling Language Model Representations",
    author = "Huang, Jing and Wu, Zhengxuan and Potts, Christopher and Geva, Mor and Geiger, Atticus",
    editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.470",
    pages = "8669--8687",
}
```