parquet-converter committed
Commit 12f657f
1 parent: d8cab99

Update parquet files
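This is the Hub's automated parquet conversion: the loading script and legacy metadata are deleted below, and the prepared splits are checked in as parquet files under `default/`. After such a conversion the dataset can usually be loaded without executing any repository code; a minimal sketch, assuming the repo id `eraser_multi_rc` and a recent `datasets` release:

```python
from datasets import load_dataset

# Loads the converted parquet splits directly; no dataset script runs.
ds = load_dataset("eraser_multi_rc")

print(ds)                   # DatasetDict with train / validation / test
print(ds["validation"][0])  # passage, query_and_answer, label, evidences
```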
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
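Each deleted line above maps a path pattern to the Git LFS filter, so matching files are stored as LFS objects and treated as opaque binaries for diffs. A rough illustration of the pattern matching (hypothetical snippet; git's own glob rules, e.g. for `saved_model/**/*`, are not fully reproduced by `fnmatch`):

```python
from fnmatch import fnmatch

# A subset of the patterns removed above.
lfs_patterns = ["*.parquet", "*.tar.*", "*tfevents*"]

for name in ["default/eraser_multi_rc-test.parquet", "runs/events.out.tfevents.0"]:
    print(name, any(fnmatch(name, p) for p in lfs_patterns))  # both True
```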
README.md DELETED
@@ -1,237 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - other
- multilinguality:
- - monolingual
- pretty_name: Eraser MultiRC (Multi-Sentence Reading Comprehension)
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - multiple-choice
- task_ids:
- - multiple-choice-qa
- paperswithcode_id: null
- dataset_info:
-   features:
-   - name: passage
-     dtype: string
-   - name: query_and_answer
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: 'False'
-           1: 'True'
-   - name: evidences
-     sequence: string
-   splits:
-   - name: test
-     num_bytes: 9194475
-     num_examples: 4848
-   - name: train
-     num_bytes: 47922877
-     num_examples: 24029
-   - name: validation
-     num_bytes: 6529020
-     num_examples: 3214
-   download_size: 1667550
-   dataset_size: 63646372
- ---
-
- # Dataset Card for "eraser_multi_rc"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** http://cogcomp.org/multirc/
- - **Repository:** https://github.com/CogComp/multirc
- - **Paper:** [Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences](https://cogcomp.seas.upenn.edu/page/publication_view/833)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 1.59 MB
- - **Size of the generated dataset:** 60.70 MB
- - **Total amount of disk used:** 62.29 MB
-
- ### Dataset Summary
-
- MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph.
-
- We have designed the dataset with three key challenges in mind:
- - The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.
- - The correct answer(s) is not required to be a span in the text.
- - The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.
-
- The goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 1.59 MB
- - **Size of the generated dataset:** 60.70 MB
- - **Total amount of disk used:** 62.29 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "evidences": "[\"Allan sat down at his desk and pulled the chair in close .\", \"Opening a side drawer , he took out a piece of paper and his ink...",
-     "label": 0,
-     "passage": "\"Allan sat down at his desk and pulled the chair in close .\\nOpening a side drawer , he took out a piece of paper and his inkpot...",
-     "query_and_answer": "Name few objects said to be in or on Allan 's desk || Eraser"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `passage`: a `string` feature.
- - `query_and_answer`: a `string` feature.
- - `label`: a classification label, with possible values including `False` (0), `True` (1).
- - `evidences`: a `list` of `string` features.
-
- ### Data Splits
-
- | name  |train|validation|test|
- |-------|----:|---------:|---:|
- |default|24029|      3214|4848|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- https://github.com/CogComp/multirc/blob/master/LICENSE
-
- Research and Academic Use License
- Cognitive Computation Group
- University of Illinois at Urbana-Champaign
-
- Downloading software implies that you accept the following license terms:
-
- Under this Agreement, The Board of Trustees of the University of Illinois ("University"), a body corporate and politic of the State of Illinois with its principal offices at 506 South Wright Street, Urbana, Illinois 61801, U.S.A., on behalf of its Department of Computer Science on the Urbana-Champaign Campus, provides the software ("Software") described in Appendix A, attached hereto and incorporated herein, to the Licensee identified below ("Licensee") subject to the following conditions:
-
- 1. Upon execution of this Agreement by Licensee below, the University grants, and Licensee accepts, a royalty-free, non-exclusive license:
- A. To use unlimited copies of the Software for its own academic and research purposes.
- B. To make derivative works. However, if Licensee distributes any derivative work based on or derived from the Software (with such distribution limited to binary form only), then Licensee will (1) notify the University (c/o Professor Dan Roth, e-mail: danr@cs.uiuc.edu) regarding its distribution of the derivative work and provide a copy if requested, and (2) clearly notify users that such derivative work is a modified version and not the original Software distributed by the University.
- C. To redistribute (sublicense) derivative works based on the Software in binary form only to third parties provided that (1) the copyright notice and any accompanying legends or proprietary notices are reproduced on all copies, (2) no royalty is charged for such copies, and (3) third parties are restricted to using the derivative work for academic and research purposes only, without further sublicensing rights.
- No license is granted herein that would permit Licensee to incorporate the Software into a commercial product, or to otherwise commercially exploit the Software. Should Licensee wish to make commercial use of the Software, Licensee should contact the University, c/o the Office of Technology Management ("OTM") to negotiate an appropriate license for such commercial use. To contact the OTM: otmmailaccount@ad.uiuc.edu; telephone: (217) 333-3781; fax: (217) 265-5530.
- 2. THE UNIVERSITY GIVES NO WARRANTIES, EITHER EXPRESSED OR IMPLIED, FOR THE SOFTWARE AND/OR ASSOCIATED MATERIALS PROVIDED UNDER THIS AGREEMENT, INCLUDING, WITHOUT LIMITATION, WARRANTY OF MERCHANTABILITY AND WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE, AND ANY WARRANTY AGAINST INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHTS.
- 3. Licensee understands the Software is a research tool for which no warranties as to capabilities or accuracy are made, and Licensee accepts the Software on an "as is, with all defects" basis, without maintenance, debugging, support or improvement. Licensee assumes the entire risk as to the results and performance of the Software and/or associated materials. Licensee agrees that University shall not be held liable for any direct, indirect, consequential, or incidental damages with respect to any claim by Licensee or any third party on account of or arising from this Agreement or use of the Software and/or associated materials.
- 4. Licensee understands the Software is proprietary to the University. Licensee will take all reasonable steps to ensure that the source code is protected and secured from unauthorized disclosure, use, or release and will treat it with at least the same level of care as Licensee would use to protect and secure its own proprietary computer programs and/or information, but using no less than reasonable care.
- 5. In the event that Licensee shall be in default in the performance of any material obligations under this Agreement, and if the default has not been remedied within sixty (60) days after the date of notice in writing of such default, University may terminate this Agreement by written notice. In the event of termination, Licensee shall promptly return to University the original and any copies of licensed Software in Licensee's possession. In the event of any termination of this Agreement, any and all sublicenses granted by Licensee to third parties pursuant to this Agreement (as permitted by this Agreement) prior to the date of such termination shall nevertheless remain in full force and effect.
- 6. The Software was developed, in part, with support from the National Science Foundation, and the Federal Government has certain license rights in the Software.
- 7. This Agreement shall be construed and interpreted in accordance with the laws of the State of Illinois, U.S.A.
- 8. This Agreement shall be subject to all United States Government laws and regulations now and hereafter applicable to the subject matter of this Agreement, including specifically the Export Law provisions of the Departments of Commerce and State. Licensee will not export or re-export the Software without the appropriate United States or foreign government license.
-
- By its registration below, Licensee confirms that it understands the terms and conditions of this Agreement, and agrees to be bound by them. This Agreement shall become effective as of the date of execution by Licensee.
- ### Citation Information
-
- ```
- @unpublished{eraser2019,
-     title = {ERASER: A Benchmark to Evaluate Rationalized NLP Models},
-     author = {Jay DeYoung and Sarthak Jain and Nazneen Fatema Rajani and Eric Lehman and Caiming Xiong and Richard Socher and Byron C. Wallace}
- }
- @inproceedings{MultiRC2018,
-     author = {Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth},
-     title = {Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences},
-     booktitle = {Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL)},
-     year = {2018}
- }
- ```
-
- ### Contributions
-
- Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
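The deleted card above documents the schema that the parquet files preserve; a hypothetical snippet showing how those fields come back out of `datasets` (the repo id is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("eraser_multi_rc", split="validation")

# `label` is a ClassLabel, so 0/1 map back to the names "False"/"True".
label_names = ds.features["label"].names

example = ds[0]
print(example["query_and_answer"])    # "<query> || <candidate answer>"
print(label_names[example["label"]])  # whether the candidate answer is correct
print(len(example["evidences"]))      # rationale sentences for the decision
```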
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "\nEraser Multi RC is a dataset for queries over multi-line passages, along with\nanswers and a rationale. Each example in this dataset has the following 5 parts\n1. A Multi-line Passage\n2. A Query about the passage\n3. An Answer to the query\n4. A Classification as to whether the answer is right or wrong\n5. An Explanation justifying the classification\n", "citation": "\n@unpublished{eraser2019,\n title = {ERASER: A Benchmark to Evaluate Rationalized NLP Models},\n author = {Jay DeYoung and Sarthak Jain and Nazneen Fatema Rajani and Eric Lehman and Caiming Xiong and Richard Socher and Byron C. Wallace}\n}\n@inproceedings{MultiRC2018,\n author = {Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth},\n title = {Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences},\n booktitle = {NAACL},\n year = {2018}\n}\n", "homepage": "https://cogcomp.seas.upenn.edu/multirc/", "license": "", "features": {"passage": {"dtype": "string", "id": null, "_type": "Value"}, "query_and_answer": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["False", "True"], "names_file": null, "id": null, "_type": "ClassLabel"}, "evidences": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "supervised_keys": null, "builder_name": "eraser_multi_rc", "config_name": "default", "version": {"version_str": "0.1.1", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 1}, "splits": {"test": {"name": "test", "num_bytes": 9194475, "num_examples": 4848, "dataset_name": "eraser_multi_rc"}, "train": {"name": "train", "num_bytes": 47922877, "num_examples": 24029, "dataset_name": "eraser_multi_rc"}, "validation": {"name": "validation", "num_bytes": 6529020, "num_examples": 3214, "dataset_name": "eraser_multi_rc"}}, "download_checksums": {"http://www.eraserbenchmark.com/zipped/multirc.tar.gz": {"num_bytes": 1667550, "checksum": "7d3364fad630d0949b40cc5c4b9fd7fac0ec9ebb3a31ffa92a491e3883edd826"}}, "download_size": 1667550, "dataset_size": 63646372, "size_in_bytes": 65313922}}
 
 
default/eraser_multi_rc-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7c8efb8050d2272250f9a4e4d216c128ac4fb29d630238be5cfc43e727e404c
+ size 362939
default/eraser_multi_rc-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4b882e4d788bff62174f96d515ee340a3c3eec38d4fa171036d66d83bff7ef7
+ size 1758896
default/eraser_multi_rc-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cf6b944650676e52d56f8c62a4ca4ceaa7af595c34ecfea6b7c9954ce972bd4
+ size 250725
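Note that the three added files are Git LFS pointers (spec version, content `oid`, byte `size`), not the parquet bytes themselves; the data is fetched on checkout via LFS. Once materialized, each split is an ordinary parquet file; a sketch assuming a local copy and `pandas` with a parquet engine installed:

```python
import pandas as pd

# Path assumed after `git lfs pull` or a manual download from the Hub.
df = pd.read_parquet("default/eraser_multi_rc-validation.parquet")

print(df.shape)          # expected (3214, 4) per the metadata above
print(list(df.columns))  # passage, query_and_answer, label, evidences
```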
eraser_multi_rc.py DELETED
@@ -1,117 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Passage, query, answers and answer classification with explanations."""
-
-
- import json
-
- import datasets
-
-
- _CITATION = """
- @unpublished{eraser2019,
-  title = {ERASER: A Benchmark to Evaluate Rationalized NLP Models},
-  author = {Jay DeYoung and Sarthak Jain and Nazneen Fatema Rajani and Eric Lehman and Caiming Xiong and Richard Socher and Byron C. Wallace}
- }
- @inproceedings{MultiRC2018,
-  author = {Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth},
-  title = {Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences},
-  booktitle = {NAACL},
-  year = {2018}
- }
- """
-
- _DESCRIPTION = """
- Eraser Multi RC is a dataset for queries over multi-line passages, along with
- answers and a rationale. Each example in this dataset has the following 5 parts
- 1. A Multi-line Passage
- 2. A Query about the passage
- 3. An Answer to the query
- 4. A Classification as to whether the answer is right or wrong
- 5. An Explanation justifying the classification
- """
-
- _DOWNLOAD_URL = "http://www.eraserbenchmark.com/zipped/multirc.tar.gz"
-
-
- class EraserMultiRc(datasets.GeneratorBasedBuilder):
-     """Multi Sentence Reasoning with Explanations (Eraser Benchmark)."""
-
-     VERSION = datasets.Version("0.1.1")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "passage": datasets.Value("string"),
-                     "query_and_answer": datasets.Value("string"),
-                     "label": datasets.features.ClassLabel(names=["False", "True"]),
-                     "evidences": datasets.features.Sequence(datasets.Value("string")),
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://cogcomp.seas.upenn.edu/multirc/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-
-         archive = dl_manager.download(_DOWNLOAD_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split_file": "multirc/train.jsonl"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split_file": "multirc/val.jsonl"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split_file": "multirc/test.jsonl"},
-             ),
-         ]
-
-     def _generate_examples(self, files, split_file):
-         """Yields examples."""
-
-         multirc_dir = "multirc/docs"
-         docs = {}
-         for path, f in files:
-             docs[path] = f.read().decode("utf-8")
-         for line in docs[split_file].splitlines():
-             row = json.loads(line)
-             evidences = []
-
-             for evidence in row["evidences"][0]:
-                 docid = evidence["docid"]
-                 evidences.append(evidence["text"])
-
-             passage_file = "/".join([multirc_dir, docid])
-             passage_text = docs[passage_file]
-
-             yield row["annotation_id"], {
-                 "passage": passage_text,
-                 "query_and_answer": row["query"],
-                 "label": row["classification"],
-                 "evidences": evidences,
-             }
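For context, the deleted `_generate_examples` reads every file out of the source tarball, then flattens one jsonl row per example; it keeps the `docid` of the last evidence to locate the passage, which implicitly assumes all evidences of a row cite the same document. A hypothetical standalone rendering of that per-row logic:

```python
import json

def flatten_row(line: str, docs: dict):
    """Mirror of the deleted script's per-row logic (names are illustrative)."""
    row = json.loads(line)
    evidences = [ev["text"] for ev in row["evidences"][0]]
    # As in the script: the docid of the last evidence selects the passage.
    docid = row["evidences"][0][-1]["docid"]
    return row["annotation_id"], {
        "passage": docs["multirc/docs/" + docid],
        "query_and_answer": row["query"],
        "label": row["classification"],
        "evidences": evidences,
    }
```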