Update README.md
README.md (CHANGED)
@@ -568,6 +568,29 @@ ds = ds.filter(lambda x: x['cleaning_status'] != 'rejected') # filter out reject
## Dataset structure

### Dataset Subsets & Cleaning Statistics

Below we list each of the platinum benchmarks along with the number of examples that were kept by consensus, revised, verified, or rejected. Examples with the first three statuses are included in the benchmark; rejected examples are excluded. See "Data Fields" for a description of what each cleaning status means.

| Dataset | **# Included** | Consensus | Revised | Verified | Rejected |
| ----- | ----- | ----- | ----- | ----- | ----- |
| SingleOp (Platinum) | **150** | 142 | 0 | 8 | 9 |
| SingleEq (Platinum) | **100** | 87 | 0 | 13 | 9 |
| MultiArith (Platinum) | **171** | 165 | 3 | 3 | 3 |
| SVAMP (Platinum) | **268** | 222 | 3 | 43 | 32 |
| GSM8K (Platinum) | **271** | 227 | 1 | 43 | 29 |
| MMLU High‑School Math (Platinum) | **268** | 106 | 0 | 162 | 2 |
| Logic. Ded. 3-Obj (Platinum) | **200** | 199 | 0 | 1 | 0 |
| Object Counting (Platinum) | **190** | 58 | 0 | 132 | 10 |
| Navigate (Platinum) | **200** | 134 | 0 | 66 | 0 |
| TabFact (Platinum) | **173** | 58 | 3 | 112 | 27 |
| HotPotQA (Platinum) | **183** | 48 | 89 | 46 | 67 |
| SQuAD 2.0 (Platinum) | **164** | 78 | 43 | 43 | 86 |
| DROP (Platinum) | **209** | 30 | 177 | 2 | 41 |
| Winograd WSC (Platinum) | **195** | 77 | 0 | 118 | 5 |
| VQA (Platinum) | **242** | 0 | 242 | 0 | 358 |
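As a sanity check, the counts above can be recomputed from the 'cleaning_status' field of each subset. The sketch below is only illustrative: the repo ID, subset name, split, and exact status strings are assumptions, so substitute this dataset's actual identifiers if they differ.

```python
from collections import Counter
from datasets import load_dataset

# Assumed repo ID, subset, and split -- replace with this dataset's actual identifiers.
ds = load_dataset("madrylab/platinum-bench", "gsm8k", split="test")

# Tally examples by cleaning status; the status strings ('consensus', 'revised',
# 'verified', 'rejected') are assumed to mirror the table columns above.
counts = Counter(ds["cleaning_status"])
included = sum(n for status, n in counts.items() if status != "rejected")

print(counts)
print(f"# Included: {included} | Rejected: {counts.get('rejected', 0)}")
```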
### Data Instances

We accessed each of the fourteen original natural language benchmarks that we revised from their respective Hugging Face repositories, and each benchmark had its own per-instance data fields/columns. We have standardized these benchmarks by providing pre-constructed prompts for each dataset (under 'platinum_prompt'). Each prompt template automatically formats the relevant dataset columns into a consistent structure. You can use these standardized prompts directly, but we also include the original dataset columns for those interested in doing their own prompting or in seamlessly substituting our revised benchmarks for the original versions.
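For example, here is a minimal sketch of both options, using the same assumed repo ID, subset, and split as above; the original-column name shown is hypothetical, since the actual columns vary per benchmark.

```python
from datasets import load_dataset

# Assumed repo ID, subset, and split -- replace with this dataset's actual identifiers.
ds = load_dataset("madrylab/platinum-bench", "gsm8k", split="test")
ds = ds.filter(lambda x: x["cleaning_status"] != "rejected")  # drop rejected examples

example = ds[0]

# Option 1: use the standardized, pre-constructed prompt as-is.
prompt = example["platinum_prompt"]
print(prompt)

# Option 2: build your own prompt from the original benchmark columns, e.g. a
# hypothetical 'question' column for a math word-problem subset (names vary per benchmark):
# custom_prompt = f"Solve the following problem.\n\n{example['question']}"
```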