Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, Dask
YuehHanChen committed
Commit 46b556a
1 Parent(s): 4274f6b

Update README.md

Files changed (1): README.md (+24 -12)
README.md CHANGED
@@ -1,16 +1,28 @@
- ---
- license: apache-2.0
- tags:
- - ChatGPT
- ---
- <p align="center"><h1>Raw Dataset from Paper Approaching Human-Level Forecasting with Language Models</h1></p>
-
- We used this raw dataset in our paper, **[Approaching Human-Level Forecasting with Language Models](https://arxiv.org/abs/2402.18563)**.
-
- Data format. Forecasting platforms such as Metaculus, Good Judgment Open, INFER, Polymarket, and Manifold invite participants to predict future events by assigning probabilities to outcomes of a question.
- Each question consists of a background description, resolution criterion, and 3 timestamps: a begin date when the question was published, a close date when no further forecasts can be submitted, and (eventually)
- a resolve date when the outcome is determined. A forecast can be submitted between the begin date and min(resolve date, close date). See Table 1 for an example question with these main fields.
-
- In this raw data, we source forecasting questions from the 5 above-mentioned platforms. This yields a total of 48,754 questions and 7,174,607 user forecasts spanning from 2015 to 2024. The dataset includes 33,664 binary
- questions, 9,725 multiple-choice questions, 4,019 numerical questions, and 1,346 questions of other types. The questions cover a wide range of topics across the globe.
+ <p align="center"><h1>Raw Dataset from "Approaching Human-Level Forecasting with Language Models"</h1></p>
+
+ <p align="center">This documentation provides an overview of the raw dataset used in our research paper, <strong><a href="https://arxiv.org/abs/2402.18563" target="_blank">Approaching Human-Level Forecasting with Language Models</a></strong>, authored by Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt.</p>
+
+ <h2>Data Source and Format</h2>
+ <p>The dataset originates from forecasting platforms such as Metaculus, Good Judgment Open, INFER, Polymarket, and Manifold. These platforms invite users to predict future events by assigning probabilities to the possible outcomes of a question. Each question comprises:</p>
+ <ul>
+ <li><strong>Background Description:</strong> Provides context for the forecasting question.</li>
+ <li><strong>Resolution Criterion:</strong> Defines how and when the question is resolved.</li>
+ <li><strong>Timestamps:</strong> The publication date (begin date), the forecast submission deadline (close date), and the date the outcome is determined (resolve date).</li>
+ </ul>
+
+ <p>A forecast can be submitted at any time between the begin date and the earlier of the close date and the resolve date. Refer to <em>Table 1</em> in the paper for an example question with these main fields.</p>
+
+ <h2>Dataset Composition</h2>
+ <p>The dataset aggregates forecasting questions from the five platforms listed above, yielding:</p>
+ <ul>
+ <li><strong>48,754 Questions:</strong> Spanning from 2015 to 2024.</li>
+ <li><strong>7,174,607 User Forecasts:</strong> Submitted by platform users on these questions.</li>
+ <li><strong>Question Types:</strong> 33,664 binary questions, 9,725 multiple-choice questions, 4,019 numerical questions, and 1,346 questions of other types.</li>
+ </ul>
+
+ <p>The questions cover a broad spectrum of topics worldwide, providing a diverse and extensive basis for forecasting analysis.</p>
+
+ <h2>Research Significance</h2>
+ <p>This dataset is central to our study: it lets us evaluate how well language models forecast and how close they come to human-level performance in predicting future events. By analyzing this large collection of user forecasts, we aim to characterize both the predictive power and the limitations of current language models.</p>
+
+ <p>For more details on our methodology and findings, please refer to the paper linked at the top of this document.</p>
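The forecast-window rule and the question-type counts described in the card can be sketched in Python. This is a minimal illustration: the field names (`begin_date`, `close_date`, `resolve_date`, and so on) are assumptions for the sketch and may not match the dataset's actual JSON keys.

```python
from datetime import datetime

# Hypothetical layout of one forecasting question; actual key names
# in the raw JSON may differ.
question = {
    "background": "Will X happen by June 2023?",
    "resolution_criterion": "Resolves YES if X occurs before the resolve date.",
    "begin_date": datetime(2023, 1, 1),    # when the question was published
    "close_date": datetime(2023, 6, 1),    # last day forecasts are accepted
    "resolve_date": datetime(2023, 5, 15), # when the outcome was determined
}

def in_forecast_window(q, t):
    """A forecast may be submitted between the begin date and
    min(resolve date, close date)."""
    return q["begin_date"] <= t <= min(q["resolve_date"], q["close_date"])

print(in_forecast_window(question, datetime(2023, 3, 1)))   # True
print(in_forecast_window(question, datetime(2023, 5, 20)))  # False: past resolve date

# Sanity check: the per-type counts reported in the card sum to the total.
counts = {"binary": 33_664, "multiple_choice": 9_725,
          "numerical": 4_019, "other": 1_346}
assert sum(counts.values()) == 48_754
```

Note that the window is bounded by whichever of the close date and resolve date comes first, since a question occasionally resolves before its scheduled close.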