nielsr HF Staff committed on
Commit 916abb2 · verified · 1 Parent(s): c56a2c7

Improve dataset card: add paper link, project page, and task category


Hi there! I'm Niels from the community science team at Hugging Face.

I've updated the dataset card to include:
- A link to the paper "TFRBench: A Reasoning Benchmark for Evaluating Forecasting Systems".
- A link to the official project page.
- The appropriate task category (`time-series-forecasting`) in the YAML metadata.
- A brief description of the benchmark's purpose to help researchers understand the dataset.

This will make the dataset more discoverable and provide proper attribution to the authors.

Files changed (1): README.md (+16 −5)
README.md CHANGED
@@ -1,6 +1,13 @@
-# TFRBench Submission Guidelines
+---
+task_categories:
+- time-series-forecasting
+---
 
-Thank you for your interest in TFRBench! To participate in the leaderboard, please follow the directory structure and schema below to format your model predictions.
+# TFRBench: A Reasoning Benchmark for Evaluating Forecasting Systems
+
+[Paper](https://huggingface.co/papers/2604.05364) | [Project Page](https://tfrbench.github.io/)
+
+TFRBench is the first benchmark designed to evaluate the reasoning capabilities of forecasting systems. While traditional time-series forecasting evaluations focus solely on numerical accuracy, TFRBench provides a protocol for evaluating the reasoning generated by models—specifically their analysis of cross-channel dependencies, trends, and external events. The benchmark spans ten datasets across five diverse domains.
 
 ## How to Download the Data
 
@@ -12,6 +19,10 @@ from huggingface_hub import snapshot_download
 snapshot_download(repo_id="AtikAhamed/TFRBench", repo_type="dataset", local_dir="./my_local_data")
 ```
 
+# TFRBench Submission Guidelines
+
+Thank you for your interest in TFRBench! To participate in the leaderboard, please follow the directory structure and schema below to format your model predictions.
+
 ## Public Inputs (What you receive)
 
 You will be provided with public input JSON files. Each file is a list of objects containing historical data and the timestamps for which you need to predict.
@@ -93,6 +104,6 @@ Each JSON file must be a list of objects. Each object represents a prediction fo
 
 ### Required Fields:
 
-- `id` (String): The unique identifier for the sample (must match the ID provided in public inputs).
-- `Reasoning` (String): The text explanation generated by your model.
-- `Prediction` (List of Lists): A 2D numerical array representing the forecast window. For single-channel datasets, use `[[value]]` per time step.
+- `id` (String): The unique identifier for the sample (must match the ID provided in public inputs).
+- `Reasoning` (String): The text explanation generated by your model.
+- `Prediction` (List of Lists): A 2D numerical array representing the forecast window. For single-channel datasets, use `[[value]]` per time step.
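The required-fields list above defines a simple JSON schema for submissions. As a minimal sketch, a record could be built and checked in Python like this — the field names (`id`, `Reasoning`, `Prediction`) come from the card, while the sample values and the `validate_record` helper are illustrative assumptions, not part of the official tooling:

```python
import json

# One hypothetical submission record following the TFRBench schema.
record = {
    "id": "sample-0001",  # must match the ID from the public inputs
    "Reasoning": "Demand rises on weekends; the forecast extends that weekly cycle.",
    # 2D array: one inner list per time step; single-channel data uses [[value]].
    "Prediction": [[101.2], [98.7], [103.5]],
}

def validate_record(rec: dict) -> None:
    """Minimal type/shape check for one submission object."""
    assert isinstance(rec["id"], str)
    assert isinstance(rec["Reasoning"], str)
    pred = rec["Prediction"]
    assert isinstance(pred, list) and all(isinstance(step, list) for step in pred)
    assert all(isinstance(v, (int, float)) for step in pred for v in step)

validate_record(record)

# A submission file is a JSON list of such objects, one per sample.
submission_json = json.dumps([record], indent=2)
```

Checking records locally before upload catches shape mistakes (e.g. a flat list instead of a list of per-step lists for `Prediction`) that would otherwise only surface during leaderboard evaluation.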