Update README.md
README.md CHANGED
@@ -12,10 +12,27 @@ tags:
 - document-understanding
 - benchmark
 pretty_name: AIDABench
+configs:
+- config_name: qa
+  data_files:
+  - split: test
+    path: QA/QA.jsonl
+- config_name: data_visualization
+  data_files:
+  - split: test
+    path: data_visualization/data_visualization.jsonl
+- config_name: file_generation
+  data_files:
+  - split: test
+    path: file_generation/file_generation.jsonl
 ---
 
 # Dataset Card for AIDABench
 
+## Links
+- [Paper (arXiv)](https://arxiv.org/abs/2603.15636)
+- [GitHub Repository](https://github.com/MichaelYang-lyx/AIDABench)
+
 ## Dataset Summary
 
 **AIDABench** is a benchmark for evaluating AI systems on **end-to-end data analytics over real-world documents**. It contains **600+** diverse analytical tasks grounded in realistic scenarios and spans heterogeneous data sources such as **spreadsheets, databases, financial reports, and operational records**. Tasks are designed to be challenging, often requiring multi-step reasoning and tool use to complete reliably.
@@ -94,9 +111,6 @@ AIDABench is intended for:
 - The benchmark is designed for tool-augmented settings; purely text-only inference may underperform due to the need for code execution and file manipulation.
 - Automated evaluation relies on LLM judges, which introduces additional compute cost and (small) scoring variance depending on settings.
 
-## Links
-
-- **GitHub Repository**: [https://github.com/MichaelYang-lyx/AIDABench](https://github.com/MichaelYang-lyx/AIDABench)
 
 ## Citation
 
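The `configs` block added in this change points each config at a JSONL split file (one JSON object per line). A minimal sketch of reading one such record with the standard library; the field names (`id`, `question`) are illustrative placeholders, not the dataset's documented schema:

```python
import json

# One line of a split file such as QA/QA.jsonl is a single JSON object.
# "id" and "question" below are hypothetical field names for illustration.
sample_line = '{"id": "qa-001", "question": "What was total revenue in FY2021?"}'

record = json.loads(sample_line)
print(record["id"])
```

With the `datasets` library, these configs would typically be selected by name, e.g. `load_dataset("<repo_id>", "qa", split="test")`, where `<repo_id>` is whatever Hub repository this card is published under.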