Update README for github link
README.md
---
configs:
- config_name: default
  data_files:
  - split: all
    path: table_data/all_AOE_tables.jsonl
---

# 🏆 AOE: Arranged and Organized Extraction Benchmark

**📚 For full reproducibility, all source code is available in our [GitHub repository](https://github.com/tianyumyum/AOE).**

> **🎯 Challenge**: Can AI models construct structured tables from complex, real-world documents? AOE tests this critical capability across legal, financial, and academic domains.

## 🚀 What is AOE?

The **AOE (Arranged and Organized Extraction) Benchmark** addresses a critical gap in existing text-to-table evaluation frameworks. Unlike synthetic benchmarks, AOE challenges modern LLMs with **authentic, complex, and practically relevant** data extraction tasks.

> 💥 **Why "AOE"?** Like Area-of-Effect damage in gaming, which hits everything within range, our benchmark reveals that current AI models struggle across *all* aspects of structured extraction, from basic parsing to complex reasoning. No model escapes unscathed!

### 🎯 Core Innovation

**Beyond Isolated Information**: AOE doesn't just test information retrieval; it evaluates models' ability to:
- 🧠 **Understand** complex task requirements and construct appropriate schemas
- 🔍 **Locate** scattered information across multiple lengthy documents
- 🏗️ **Integrate** diverse data points into coherent, structured tables
- 🧮 **Perform** numerical reasoning and cross-document analysis

### 📊 Key Statistics

| Metric | Value |
|--------|-------|
| **Total Tasks** | 373 benchmark instances |
| **Domains** | 3 (Legal, Financial, Academic) |
| **Document Sources** | 100% real-world, authentic content |
| **Total Documents** | 1,914 source documents |
| **Languages** | English & Chinese |

#### 📈 Detailed Domain Statistics

| Domain | Language | Tables | Documents | Avg Tokens | Docs/Table |
|--------|----------|--------|-----------|------------|------------|
| **Academic** | EN | 74 | 257 | 69k | 3.5/5 |
| **Financial** | ZH, EN | 224 | 944 | 437k | 4.2/5 |
| **Legal** | ZH | 75 | 713 | 7k | 9.6/13 |

## 📁 Dataset Structure

Each record pairs a natural-language query and a dynamic table schema with the lengths of its source documents and the gold-standard table rows:

```python
{
    "record_id": "academic_10_0_en",
    "query": "Identify possible citation relationships among the following articles...",
    "doc_length": {                      # Character count per document
        "paper_1.md": 141566,
        "paper_2.md": 885505,
        "paper_3.md": 48869,
        "paper_4.md": 65430,
        "paper_5.md": 53987
    },
    "table_schema": {                    # Dynamic schema definition
        "columns": [
            {"name": "Cited paper title", "about": "the name of the paper"},
            {"name": "Referencing paper title", "about": "Referencing paper title"},
            {"name": "Referenced content", "about": "the context of the cited paper"},
            {"name": "Label", "about": "reference type: background/methodology/additional"}
        ]
    },
    "answers": [                         # Ground-truth structured output
        {
            "Cited paper title": "Large Language Model Is Not a Good Few-shot Information Extractor...",
            "Referencing paper title": "What Makes Good In-Context Examples for GPT-3?",
            "Referenced content": "(2) Sentence-embedding (Liu et al., 2022; Su et al., 2022): retrieving...",
            "Label": "background"
        }
    ]
}
```

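The helper below is an illustrative sketch (not an official AOE utility) of how the `table_schema` and `answers` fields above map onto the CSV tables the benchmark expects. It assumes both fields are loaded as Python objects; if your loader returns them as JSON strings, parse them with `json.loads` first.

```python
import csv
import io


def answers_to_csv(record: dict) -> str:
    """Render a record's gold answers as a CSV string using its dynamic schema.

    Illustrative sketch based on the record layout shown above; field names and
    types are assumptions, not an official AOE API.
    """
    columns = [col["name"] for col in record["table_schema"]["columns"]]
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=columns)
    writer.writeheader()
    for row in record["answers"]:
        # Missing cells become empty strings, mirroring the CSV examples below.
        writer.writerow({name: row.get(name, "") for name in columns})
    return buffer.getvalue()
```
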
## 🏭 Data Sources & Domains

<div align="center">
<img src="fig_data_process-0516-v4.jpg" alt="AOE Benchmark Construction Process" width="800">
<p><em>Figure: AOE benchmark construction pipeline from raw documents to structured evaluation tasks</em></p>
</div>

### 📚 **Academic Domain**
- **Sources**: Semantic Scholar, Papers With Code
- **Content**: Research papers, citation networks, performance leaderboards
- **Tasks**: Citation relationship extraction, methodology performance analysis

### 💰 **Financial Domain**
- **Source**: CNINFO (China's official financial disclosure platform)
- **Content**: Annual reports (2020-2023) from A-share listed companies
- **Tasks**: Longitudinal financial analysis, cross-company comparisons

### ⚖️ **Legal Domain**
- **Sources**: People's Court Case Library, National Legal Database
- **Content**: Chinese civil law judgments, official statutes
- **Tasks**: Legal provision retrieval, defendant verdict extraction

## 🎯 Benchmark Tasks Overview

### 📊 Task Categories

| Domain | Task ID | Description | Challenge Level |
|--------|---------|-------------|-----------------|
| **Academic** | $Aca_0$ | Citation Context Extraction | 🔥🔥🔥 |
| | $Aca_1$ | Methodology Performance Extraction | 🔥🔥 |
| **Legal** | $Legal_0$ | Legal Provision Retrieval | 🔥🔥🔥🔥 |
| | $Legal_1$ | Defendant Verdict Extraction | 🔥🔥🔥 |
| **Financial** | $Fin_{0-3}$ | Single-Company Longitudinal Analysis | 🔥🔥 |
| | $Fin_{4-6}$ | Multi-Company Comparative Analysis | 🔥🔥🔥 |

### 🏗️ Data Processing Pipeline

- **📄 Document Preservation**: Advanced parsing with `markitdown`, `Marker`, and OCR (a minimal conversion sketch follows this list)
- **🏷️ Human-in-the-Loop**: Expert annotation for legal document processing
- **✅ Quality Assurance**: Multi-stage validation ensuring accuracy and completeness

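As a rough illustration of the document-conversion step only (the project's actual parsing pipeline lives in the [GitHub repository](https://github.com/tianyumyum/AOE) and also uses `Marker` and OCR), converting a single source file to Markdown with `markitdown` looks roughly like this; the input filename is hypothetical:

```python
# Illustrative sketch of the markitdown conversion step, not the project's pipeline.
from markitdown import MarkItDown

converter = MarkItDown()
result = converter.convert("annual_report_2023.pdf")  # hypothetical input document
print(result.text_content[:500])                      # preview the extracted Markdown text
```
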
## 💡 Example Tasks

### ⚖️ Legal Analysis Example
**Task**: Extract structured verdict information from complex trademark infringement cases

<details>
<summary><strong>📋 View Ground Truth Table</strong></summary>

**Input Query**: "作为法律文本分析专家,请按照指定格式从判决信息中准确提取每位被告的最终判决结果" (*As a legal text analysis expert, extract each defendant's final verdict from the judgment information in the specified format.*)

**Source Documents**: complex legal cases (678-2391 tokens each)

```csv
案件名,被告,罪名,刑期,缓刑,处罚金,其他判决
刘某假冒注册商标案,刘某,假冒注册商标罪,有期徒刑四年,,处罚金人民币十五万元,扣押车辆、手机等变价抵作罚金
欧某辉、张某妹假冒注册商标案,欧某辉,假冒注册商标罪,有期徒刑五年六个月,,处罚金人民币六十五万元,追缴违法所得100.6583万元
谢某某甲等假冒注册商标案,谢某某甲,无罪,,,,
马某华等假冒注册商标案,马某华,假冒注册商标罪,有期徒刑六年,,处罚金人民币六百八十万元,
……
```

(Columns: case name, defendant, charge, prison term, probation, fine, other judgments.)

**Challenge**: Models must parse complex legal language from multiple case documents (avg 9.6 docs per table), handle joint-defendant cases with up to 16 defendants, distinguish between verdict outcomes (guilty vs. acquitted), and extract structured information from unstructured legal narratives involving trademark infringement worth millions.

</details>

### 📚 Academic Analysis Example
**Task**: Extract methodology performance from research papers on the WikiText-103 dataset

<details>
<summary><strong>📊 View Ground Truth Table</strong></summary>

**Input Query**: "List the Test perplexity performance of the proposed methods in the paper on the WikiText-103 dataset."

**Source Documents**: research papers (36k-96k tokens each)

```csv
paper_name,method,result,models_and_settings
Primal-Attention: Self-attention through Asymmetric Kernel SVD,Primal.+Trans.,31,
Language Modeling with Gated Convolutional Networks,GCNN-8,44.9,
GATELOOP: FULLY DATA-CONTROLLED LINEAR RECURRENCE,GateLoop,13.4,
```

**Challenge**: Models must parse complex academic papers, identify specific methodologies, locate performance tables, and extract numerical results while handling varied formatting styles.

</details>

### 🏦 Financial Analysis Example
**Task**: Extract and compare financial metrics across multiple company annual reports

<details>
<summary><strong>📊 View Ground Truth Table</strong></summary>

```csv
Company,Revenue (CNY),Net Profit (CNY),Operating Cash Flow (CNY)
Gree Electric,203979266387,29017387604,56398426354
Midea Group,372037280000,33719935000,57902611000
Haier Smart Home,261427783050,16596615046,25262376228
TCL Technology,174366657015,4781000000,25314756105
GONGNIU GROUP,15694755600,3870135376,4827282090
```

**Challenge**: Models must locate financial data scattered across lengthy annual reports (avg 437k tokens), handle different formatting conventions, and ensure numerical accuracy across multiple documents.

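As an illustrative sketch (not part of the benchmark itself) of the kind of cross-company numerical reasoning these tasks target, derived metrics such as net margin can be computed directly from the ground-truth table above:

```python
# Illustrative only: compute net margin from the ground-truth table above.
import io

import pandas as pd

gold_csv = """Company,Revenue (CNY),Net Profit (CNY),Operating Cash Flow (CNY)
Gree Electric,203979266387,29017387604,56398426354
Midea Group,372037280000,33719935000,57902611000
Haier Smart Home,261427783050,16596615046,25262376228
"""

df = pd.read_csv(io.StringIO(gold_csv))
df["Net Margin (%)"] = 100 * df["Net Profit (CNY)"] / df["Revenue (CNY)"]
print(df[["Company", "Net Margin (%)"]].round(2))
```
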
</details>

## 🔬 Research Applications

### 🎯 Ideal for Evaluating
- **Multi-document Understanding**: Information synthesis across long-form texts
- **Schema Construction**: Dynamic table structure generation
- **Domain Adaptation**: Performance across specialized fields
- **Numerical Reasoning**: Financial calculations and quantitative analysis
- **Cross-lingual Capabilities**: English and Chinese document processing

### 📈 Benchmark Insights
- **Even SOTA models struggle**: Best performers achieve only ~68% accuracy
- **Domain specificity matters**: Performance varies significantly across fields
- **Length matters**: Document complexity correlates with task difficulty
- **RAG limitations revealed**: Standard retrieval often fails for structured tasks

## 🚀 Getting Started

### Quick Usage
```python
from datasets import load_dataset

# Load the complete benchmark
dataset = load_dataset("tianyumyum/AOE")

# Access the single "all" split
all_tasks = dataset["all"]

# Inspect one task
task = all_tasks[0]
print(f"Documents: {len(task['doc_length'])}")
print(f"Expected output: {task['answers']}")
```

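Building on the Quick Usage snippet above, tasks can be grouped by domain using the `record_id` prefix seen in the example record (e.g. `"academic_10_0_en"`); the exact prefix values for the other domains are an assumption, so adjust as needed:

```python
# Illustrative: count tasks per domain from the record_id prefix (assumed layout).
from collections import Counter

domain_counts = Counter(task["record_id"].split("_")[0] for task in all_tasks)
print(domain_counts)
```
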
### 📊 Evaluation Framework
AOE provides a comprehensive three-tier evaluation system (a simplified cell-level scoring sketch follows the list):
1. **🎯 CSV Parsability**: Basic structure compliance (Pass Rate)
2. **🏆 Overall Quality**: LLM-assessed holistic evaluation (0-100%)
3. **🔬 Cell-Level Accuracy**: Granular content precision (F1-Score)

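The official scoring scripts are in the [GitHub repository](https://github.com/tianyumyum/AOE); the snippet below is only a simplified sketch of what exact-match cell-level scoring could look like, not the benchmark's actual metric implementation:

```python
# Simplified sketch of cell-level F1 between a predicted and a gold CSV table.
# The official AOE metrics (see the GitHub repo) are more nuanced than exact match.
import csv
import io


def cell_f1(pred_csv: str, gold_csv: str) -> float:
    def cells(text: str) -> set:
        rows = list(csv.reader(io.StringIO(text)))
        if len(rows) < 2:
            return set()
        header = rows[0]
        return {
            (header[i], row[i].strip())
            for row in rows[1:]
            for i in range(min(len(header), len(row)))
            if row[i].strip()
        }

    pred, gold = cells(pred_csv), cells(gold_csv)
    if not pred or not gold:
        return 0.0
    true_positives = len(pred & gold)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(pred)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)
```
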
## 🤝 Contributing & Support

- 🐛 **Issues**: [GitHub Issues](https://github.com/tianyumyum/AOE/issues)
- 💬 **Discussions**: [GitHub Discussions](https://github.com/tianyumyum/AOE/discussions)

<div align="center">

**⭐ Star our [GitHub repo](https://github.com/tianyumyum/AOE) if you find AOE useful! ⭐**

*Pushing the boundaries of structured knowledge extraction* 🚀

</div>