system HF staff committed on
Commit bb0a777
1 Parent(s): c4e72ce

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2)
  1. README.md +21 -21
  2. trivia_qa.py +6 -4
README.md CHANGED
@@ -27,7 +27,7 @@
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
 
-## [Dataset Description](#dataset-description)
+## Dataset Description
 
 - **Homepage:** [http://nlp.cs.washington.edu/triviaqa/](http://nlp.cs.washington.edu/triviaqa/)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,7 +37,7 @@
 - **Size of the generated dataset:** 43351.32 MB
 - **Total amount of disk used:** 52184.66 MB
 
-### [Dataset Summary](#dataset-summary)
+### Dataset Summary
 
 TriviaqQA is a reading comprehension dataset containing over 650K
 question-answer-evidence triples. TriviaqQA includes 95K question-answer
@@ -45,19 +45,19 @@ pairs authored by trivia enthusiasts and independently gathered evidence
 documents, six per question on average, that provide high quality distant
 supervision for answering the questions.
 
-### [Supported Tasks](#supported-tasks)
+### Supported Tasks
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Languages](#languages)
+### Languages
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Dataset Structure](#dataset-structure)
+## Dataset Structure
 
 We show detailed information for up to 5 configurations of the dataset.
 
-### [Data Instances](#data-instances)
+### Data Instances
 
 #### rc
 
@@ -103,7 +103,7 @@ An example of 'train' looks as follows.
 
 ```
 
-### [Data Fields](#data-fields)
+### Data Fields
 
 The data fields are the same among all splits.
 
@@ -203,7 +203,7 @@ The data fields are the same among all splits.
 - `type`: a `string` feature.
 - `value`: a `string` feature.
 
-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size
 
 | name |train |validation|test |
 |--------------------|-----:|---------:|----:|
@@ -212,49 +212,49 @@ The data fields are the same among all splits.
 |unfiltered | 87622| 11313|10832|
 |unfiltered.nocontext| 87622| 11313|10832|
 
-## [Dataset Creation](#dataset-creation)
+## Dataset Creation
 
-### [Curation Rationale](#curation-rationale)
+### Curation Rationale
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Source Data](#source-data)
+### Source Data
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Annotations](#annotations)
+### Annotations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data
 
-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-## [Additional Information](#additional-information)
+## Additional Information
 
-### [Dataset Curators](#dataset-curators)
+### Dataset Curators
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Licensing Information](#licensing-information)
+### Licensing Information
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### [Citation Information](#citation-information)
+### Citation Information
 
 ```
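The card's split table above names the configurations a reader can load. As a minimal usage sketch with the `datasets` library: the `rc` configuration is taken from the card, and `unfiltered` and `unfiltered.nocontext` load the same way.

```python
from datasets import load_dataset

# "rc" is one of the configurations documented in the card above;
# "unfiltered" and "unfiltered.nocontext" follow the same pattern.
rc = load_dataset("trivia_qa", "rc")

# The split sizes should line up with the card's Data Splits table.
print({split: rc[split].num_rows for split in rc})
```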
trivia_qa.py CHANGED
@@ -20,7 +20,6 @@ from __future__ import absolute_import, division, print_function
 
 import glob
 import json
-import logging
 import os
 
 import six
@@ -28,6 +27,9 @@ import six
 import datasets
 
 
+logger = datasets.logging.get_logger(__name__)
+
+
 _CITATION = """
 @article{2017arXivtriviaqa,
        author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld},
@@ -243,7 +245,7 @@ class TriviaQa(datasets.GeneratorBasedBuilder):
             new_items = []
             for item in collection:
                 if "Filename" not in item:
-                    logging.info("Missing context 'Filename', skipping.")
+                    logger.info("Missing context 'Filename', skipping.")
                     continue
 
                 new_item = item.copy()
@@ -252,7 +254,7 @@ class TriviaQa(datasets.GeneratorBasedBuilder):
                     with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
                         new_item[context_field] = f.read()
                 except (IOError, datasets.Value("errors").NotFoundError):
-                    logging.info("File does not exist, skipping: %s", fname)
+                    logger.info("File does not exist, skipping: %s", fname)
                     continue
                 new_items.append(new_item)
             return new_items
@@ -290,7 +292,7 @@ class TriviaQa(datasets.GeneratorBasedBuilder):
             }
 
             for filepath in files:
-                logging.info("generating examples from = %s", filepath)
+                logger.info("generating examples from = %s", filepath)
                 fname = os.path.basename(filepath)
 
                 with open(filepath, encoding="utf-8") as f:
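
The only functional change in `trivia_qa.py` is the switch from the root stdlib `logging` module to a module-level logger obtained via `datasets.logging.get_logger(__name__)`, so the builder's messages follow the library's verbosity settings. A minimal sketch of the same pattern in isolation; `read_context` is a hypothetical stand-in for the builder's evidence-file reads:

```python
import datasets

# Module-level logger, as introduced by this commit: messages are routed
# through the datasets library's logging configuration rather than the
# root stdlib logger.
logger = datasets.logging.get_logger(__name__)


def read_context(path):
    """Hypothetical helper mirroring the builder's log-and-skip behaviour."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except OSError:
        # Like the builder, log the missing evidence file and move on
        # instead of aborting the dataset build.
        logger.info("File does not exist, skipping: %s", path)
        return None


# Library INFO messages are hidden by default; opt in to see the skips:
datasets.logging.set_verbosity_info()
read_context("does-not-exist.txt")
```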