---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: answer
      dtype: string
    - name: type
      dtype: string
    - name: level
      dtype: string
    - name: supporting_facts
      sequence:
        - name: title
          dtype: string
        - name: sent_id
          dtype: int32
    - name: context
      sequence:
        - name: title
          dtype: string
        - name: sentences
          sequence: string
    - name: problem
      dtype: string
    - name: source
      dtype: string
    - name: gold_removed
      dtype: int64
    - name: removed_titles
      sequence: string
  splits:
    - name: train
      num_bytes: 226320070.31742346
      num_examples: 20000
    - name: test
      num_bytes: 5716501.080351114
      num_examples: 500
  download_size: 116725004
  dataset_size: 232036571.39777458
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

This dataset is a modified version of the HotPotQA distractor dataset, which contains factual questions requiring multi-hop reasoning. In the original HotPotQA dataset, each example presents ten paragraphs, only two of which contain the information necessary to answer the question; the remaining eight include closely related but irrelevant details. Solving the task therefore requires the model to identify and reason over the pertinent passages.

To more strongly develop uncertainty-reasoning capability, we construct HotPotQA-Modified, in which we systematically remove 0, 1, or both of the two key paragraphs required to answer each question. This modification introduces varying levels of informational completeness that the model must reason over. The `gold_removed` column records the number of relevant paragraphs removed (0, 1, or 2). Questions are distributed across three equal groups: one-third have no relevant paragraphs (0 relevant / 8 distractors), one-third have one relevant paragraph (1/7), and one-third have both relevant paragraphs (2/6). Every question thus contains exactly 8 paragraphs in total.
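As a minimal sketch of the schema and the invariants described above, the snippet below builds two toy records (illustrative values only, not real data) and checks that each question keeps 8 paragraphs and that `removed_titles` matches `gold_removed`; the actual data should be loaded with the `datasets` library instead.

```python
from collections import Counter

# Toy records following the dataset schema; all values are illustrative.
records = [
    {
        "id": "q1",
        "problem": "Which of the two magazines was founded first?",
        "answer": "Magazine A",
        "type": "comparison",
        "level": "medium",
        "gold_removed": 1,
        "removed_titles": ["Magazine B"],
        "context": {
            "title": [f"title_{i}" for i in range(8)],
            "sentences": [["A sentence."] for _ in range(8)],
        },
    },
    {
        "id": "q2",
        "problem": "Where was the director of the film born?",
        "answer": "Chicago",
        "type": "bridge",
        "level": "hard",
        "gold_removed": 2,
        "removed_titles": ["The Film", "The Director"],
        "context": {
            "title": [f"title_{i}" for i in range(8)],
            "sentences": [["A sentence."] for _ in range(8)],
        },
    },
]

# Invariants described above: every example keeps 8 total paragraphs,
# and removed_titles names exactly the gold_removed dropped paragraphs.
for r in records:
    assert len(r["context"]["title"]) == 8
    assert len(r["removed_titles"]) == r["gold_removed"]

# Group sizes by number of removed gold paragraphs (0, 1, or 2).
counts = Counter(r["gold_removed"] for r in records)
print(counts)
```

In the real dataset the same grouping over the `train` split should yield three roughly equal buckets for `gold_removed` values 0, 1, and 2.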

To cite the original HotPotQA dataset:

```bibtex
@article{yang2018hotpotqa,
  title={HotpotQA: A dataset for diverse, explainable multi-hop question answering},
  author={Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William W and Salakhutdinov, Ruslan and Manning, Christopher D},
  journal={arXiv preprint arXiv:1809.09600},
  year={2018}
}
```

To cite our modified dataset:

```bibtex
@article{damani2025beyond,
  title={Beyond Binary Rewards: Training LMs to Reason About Their Uncertainty},
  author={Damani, Mehul and Puri, Isha and Slocum, Stewart and Shenfeld, Idan and Choshen, Leshem and Kim, Yoon and Andreas, Jacob},
  journal={arXiv preprint arXiv:2507.16806},
  year={2025}
}
```