---
language:
  - en
license: apache-2.0
---

# Raw Dataset from "Approaching Human-Level Forecasting with Language Models"

This documentation provides an overview of the raw dataset used in our research paper, *Approaching Human-Level Forecasting with Language Models* (arXiv:2402.18563), by Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt.

## Data Source and Format

The dataset originates from forecasting platforms such as Metaculus, Good Judgment Open, INFER, Polymarket, and Manifold. These platforms engage users in predicting the likelihood of future events by assigning probabilities to various outcomes. The data structure encompasses:

- **Background Description**: Provides context for the forecasting question.
- **Resolution Criterion**: Defines how and when the question will be resolved.
- **Timestamps**: The publication date (begin date), the forecast submission deadline (close date), and the resolution date (resolve date).

Forecasts can be submitted any time between the begin date and the earlier of the resolve date or close date. Refer to Table 1 in the paper for a detailed example of these fields in action.
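As a minimal sketch of the structure described above (the field names here are illustrative assumptions, not the dataset's exact schema), a single question record and its forecast window might look like:

```python
from datetime import date

# Hypothetical question record; field names are assumptions for illustration,
# not the dataset's exact schema.
question = {
    "question": "Will event X occur before 2024?",
    "background": "Context for the forecasting question.",
    "resolution_criterion": "How and when the question will be resolved.",
    "begin_date": date(2023, 1, 1),     # publication date
    "close_date": date(2023, 12, 31),   # forecast submission deadline
    "resolve_date": date(2024, 1, 15),  # resolution date
}

# Forecasts are accepted between the begin date and the earlier of the
# close date and the resolve date.
forecast_deadline = min(question["close_date"], question["resolve_date"])
```

Here the forecast window ends at the close date, since it precedes the resolve date.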

## Dataset Composition

Our dataset aggregates forecasting questions from the aforementioned platforms, resulting in a comprehensive collection of:

- **50,343 questions** spanning 2015 to 2024.
- **6,534,042 user forecasts** across those questions.
- **Question types**: 33,664 binary, 9,725 multiple-choice, 4,019 numerical, and 1,346 questions of other types.

The questions cover a broad spectrum of topics worldwide, providing a diverse and extensive dataset for forecasting analysis.
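As a sketch of how one might slice this collection, assuming each row carries a question type and a begin date (the field names and sample rows below are hypothetical, not the dataset's exact schema), filtering to binary questions from a given year range could look like:

```python
# Synthetic rows standing in for the raw question data; field names are
# assumptions for illustration, not the dataset's exact schema.
rows = [
    {"qtype": "binary", "begin_date": "2016-03-01"},
    {"qtype": "multiple_choice", "begin_date": "2019-07-12"},
    {"qtype": "binary", "begin_date": "2023-11-05"},
    {"qtype": "numerical", "begin_date": "2021-02-20"},
]

def in_year_range(row, start_year, end_year):
    """True if the question's begin date falls within [start_year, end_year]."""
    year = int(row["begin_date"][:4])
    return start_year <= year <= end_year

# Binary questions published between 2020 and 2024, inclusive.
binary_recent = [
    r for r in rows
    if r["qtype"] == "binary" and in_year_range(r, 2020, 2024)
]
```

On the synthetic rows above, only the 2023 binary question survives the filter; the same pattern applies when iterating over the real JSON records.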

## Research Significance

This dataset plays a crucial role in our study, enabling us to explore the capabilities of language models in forecasting and their potential to achieve human-level performance in predicting future events.

For more details on our methodology and findings, please refer to our paper linked at the beginning of this document.

## How to Cite

If you find our dataset and research useful for your work, please cite it using the following BibTeX entry:

```bibtex
@misc{halawi2024approaching,
      title={Approaching Human-Level Forecasting with Language Models},
      author={Danny Halawi and Fred Zhang and Chen Yueh-Han and Jacob Steinhardt},
      year={2024},
      eprint={2402.18563},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```