Commit 4f54fa9 by rbiswasfc (verified; 1 parent: e35f3fa)

Update README.md

Files changed (1): README.md (+141 -31)
---
license: mit
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 63920351
    num_examples: 2523
  - name: validation
    num_bytes: 52064930
    num_examples: 2086
  download_size: 5955070
  dataset_size: 115985281
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
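
The front matter above pins down the schema (five columns: `id`, `question`, `context`, `choices`, `label`) and two parquet splits. A minimal loading sketch, assuming a placeholder repo id since the card does not name its own:

```python
from datasets import load_dataset

# Placeholder id; substitute this dataset's actual repo id.
REPO_ID = "user/quality-processed"

ds = load_dataset(REPO_ID)
print(ds)                       # train: 2523 rows, validation: 2086 rows
example = ds["validation"][0]
print(example["question"])      # the question prompt
print(example["choices"])       # list of 4 answer options
print(example["label"])         # index of the correct choice
```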

This dataset is derived from the [`tau/scrolls`](https://huggingface.co/datasets/tau/scrolls) dataset by running the following script:

```python
import re

from datasets import load_dataset


def _normalize_answer(text):
    return " ".join(text.split()).strip()


def _drop_duplicates_in_input(untokenized_dataset):
    # from scrolls/evaluator/dataset_evaluator.py

    indices_to_keep = []
    id_to_idx = {}
    outputs = []
    for i, (id_, output) in enumerate(
        zip(untokenized_dataset["id"], untokenized_dataset["output"])
    ):
        if id_ in id_to_idx:
            outputs[id_to_idx[id_]].append(output)
            continue
        indices_to_keep.append(i)
        id_to_idx[id_] = len(outputs)
        outputs.append([output])
    untokenized_dataset = untokenized_dataset.select(indices_to_keep).flatten_indices()
    untokenized_dataset = untokenized_dataset.remove_columns("output")
    untokenized_dataset = untokenized_dataset.add_column("outputs", outputs)
    return untokenized_dataset


def _process_doc_prepended_question(doc):
    input = doc["input"]
    split = input.find("\n\n")
    return {
        "id": doc["id"],
        "pid": doc["pid"],
        "input": input,
        "outputs": doc["outputs"],
        "question": input[0:split],
        "text": input[split + 2 :],
    }


def process_doc(doc):
    quality_multiple_choice_pattern = re.compile(r" *\([A-D]\) *")
    doc = _process_doc_prepended_question(doc)

    split = doc["text"].find("\n\n", doc["text"].find("(D)"))
    choices_text = doc["text"][:split]

    doc["text"] = doc["text"][split:].strip()
    doc["choices"] = [
        _normalize_answer(choice)
        for choice in re.split(quality_multiple_choice_pattern, choices_text)[1:]
    ]
    doc["gold"] = doc["choices"].index(_normalize_answer(doc["outputs"][0]))
    return doc


def get_quality_dataset():
    """
    Download and process the QuALITY dataset following the lm-evaluation-harness
    scrolls_quality task.

    The processed dataset has train & validation splits with 2523 & 2086 examples
    respectively. Fields to be used during evaluation:
    - question: the question prompt
    - text: the context
    - choices: list of choices (4 in total)
    - gold: index of the correct choice
    """
    quality_dataset = load_dataset("tau/scrolls", "quality")
    del quality_dataset["test"]  # drop test split -> no ground truths
    for split in quality_dataset:
        quality_dataset[split] = _drop_duplicates_in_input(quality_dataset[split])
    quality_dataset = quality_dataset.map(process_doc)
    return quality_dataset


quality_dataset = get_quality_dataset()
quality_dataset = quality_dataset.rename_columns({"text": "context", "gold": "label"})
quality_dataset = quality_dataset.remove_columns(["pid", "input", "outputs"])
train_ds = quality_dataset["train"]
validation_ds = quality_dataset["validation"]
```
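
With `train_ds` and `validation_ds` in hand, the evaluation fields can be assembled into a standard multiple-choice prompt. A minimal sketch; the template below is illustrative, not the exact one used by the harness:

```python
def build_prompt(example):
    # Render the four choices as (A)-(D) options after the context and question.
    options = "\n".join(
        f"({letter}) {choice}"
        for letter, choice in zip("ABCD", example["choices"])
    )
    return (
        f"{example['context']}\n\n"
        f"Question: {example['question']}\n"
        f"{options}\n"
        f"Answer:"
    )


sample = validation_ds[0]
print(build_prompt(sample))
print("gold index:", sample["label"])
```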

The processing code is adapted from the [lm-evaluation-harness scrolls task](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/scrolls/task.py).
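
In the harness, multiple-choice tasks such as `scrolls_quality` are scored by comparing per-choice log-likelihoods. The sketch below shows the general idea, with `choice_loglikelihood` standing in for whatever model wrapper you use (it is an assumed callable, not a harness API):

```python
def predict(example, choice_loglikelihood):
    # choice_loglikelihood(prompt, continuation) -> float; higher = more likely.
    prompt = f"{example['context']}\n\nQuestion: {example['question']}\nAnswer:"
    scores = [
        choice_loglikelihood(prompt, f" {choice}")
        for choice in example["choices"]
    ]
    return max(range(len(scores)), key=scores.__getitem__)


def accuracy(dataset, choice_loglikelihood):
    # Fraction of examples where the argmax choice matches the gold label.
    hits = sum(
        predict(ex, choice_loglikelihood) == ex["label"] for ex in dataset
    )
    return hits / len(dataset)
```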

---
Relevant sections from the [SCROLLS: Standardized CompaRison Over Long Language Sequences paper](https://arxiv.org/pdf/2201.03533):
```
QuALITY (Pang et al., 2021): A multiple-choice question answering dataset over
stories and articles sourced from Project Gutenberg, the Open American National
Corpus (Fillmore et al., 1998; Ide and Suderman, 2004), and more. Experienced
writers wrote questions and distractors, and were incentivized to write
answerable, unambiguous questions such that in order to correctly answer them,
human annotators must read large portions of the given document. To measure the
difficulty of their questions, Pang et al. conducted a speed validation process,
where another set of annotators were asked to answer questions given only a short
period of time to skim through the document. As a result, 50% of the questions in
QuALITY are labeled as hard, i.e. the majority of the annotators in the speed
validation setting chose the wrong answer.
```