---
dataset_info:
  features:
    - name: subject
      dtype: string
    - name: proposition
      dtype: string
    - name: subject+predicate
      dtype: string
    - name: answer
      dtype: string
    - name: label
      dtype:
        class_label:
          names:
            '0': 'False'
            '1': 'True'
    - name: case_id
      dtype: int64
  splits:
    - name: train
      num_bytes: 915160.9417906551
      num_examples: 6896
    - name: test
      num_bytes: 101655.05820934482
      num_examples: 766
  download_size: 421630
  dataset_size: 1016816
---

# Dataset Card for "counterfact-filtered-gptj6b"

This dataset is a subset of azhx/counterfact-easy, filtered with a heuristic intended to determine whether the fact in each row is actually known by the GPT-J-6B model.

The heuristic is as follows:

For each prompt in the original CounterFact dataset used by ROME, we use GPT-J-6B to generate n=5 completions with a maximum of 30 generated tokens each. If a majority of the completions (>=3) contain the answer specified in the dataset, we conclude that the model does indeed know this fact.
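
A minimal sketch of this check using Hugging Face `transformers` is shown below. The exact decoding settings used for the original filtering are not specified beyond n=5 samples and the 30-token limit, so the sampling parameters here are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16).cuda()
model.eval()

def model_knows_fact(prompt: str, answer: str, n: int = 5, threshold: int = 3) -> bool:
    """Return True if at least `threshold` of `n` sampled completions contain `answer`."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            num_return_sequences=n,
            max_new_tokens=30,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the generated continuation, not the prompt itself.
    completions = tokenizer.batch_decode(
        outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    hits = sum(answer in completion for completion in completions)
    return hits >= threshold
```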

In practice, we find that GPT-J-6B frequently fails to answer many of the prompts in the original dataset accurately. Filtering reduced the number of case_ids from ~21k to ~3k.
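
To load the dataset, a minimal sketch (assuming the repository id is `azhx/counterfact-filtered-gptj6b`, matching the card title):

```python
from datasets import load_dataset

ds = load_dataset("azhx/counterfact-filtered-gptj6b")

# Splits and features match the metadata above:
# train (6896 examples), test (766 examples);
# fields: subject, proposition, subject+predicate, answer, label, case_id.
print(ds["train"][0])
```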