system (HF Staff) committed
Commit 7bbd4e9
Parent: 0088ca3

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2):
  1. README.md (+21 -21)
  2. conll2000.py (+4 -3)
README.md CHANGED
@@ -27,7 +27,7 @@
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)

-## [Dataset Description](#dataset-description)
+## Dataset Description

 - **Homepage:** [https://www.clips.uantwerpen.be/conll2000/chunking/](https://www.clips.uantwerpen.be/conll2000/chunking/)
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,7 +37,7 @@
 - **Size of the generated dataset:** 6.25 MB
 - **Total amount of disk used:** 9.57 MB

-### [Dataset Summary](#dataset-summary)
+### Dataset Summary

 Text chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence
 He reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows:
@@ -50,19 +50,19 @@ as the widely used data for noun phrase chunking: sections 15-18 as training dat
 test data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by
 Sabine Buchholz from Tilburg University, The Netherlands.

-### [Supported Tasks](#supported-tasks)
+### Supported Tasks

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Languages](#languages)
+### Languages

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-## [Dataset Structure](#dataset-structure)
+## Dataset Structure

 We show detailed information for up to 5 configurations of the dataset.

-### [Data Instances](#data-instances)
+### Data Instances

 #### conll2000

@@ -82,7 +82,7 @@ This example was too long and was cropped:
 }
 ```

-### [Data Fields](#data-fields)
+### Data Fields

 The data fields are the same among all splits.

@@ -92,55 +92,55 @@ The data fields are the same among all splits.
 - `pos_tags`: a `list` of classification labels, with possible values including `''` (0), `#` (1), `$` (2), `(` (3), `)` (4).
 - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).

-### [Data Splits Sample Size](#data-splits-sample-size)
+### Data Splits Sample Size

 | name |train|test|
 |---------|----:|---:|
 |conll2000| 8937|2013|

-## [Dataset Creation](#dataset-creation)
+## Dataset Creation

-### [Curation Rationale](#curation-rationale)
+### Curation Rationale

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Source Data](#source-data)
+### Source Data

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Annotations](#annotations)
+### Annotations

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Personal and Sensitive Information](#personal-and-sensitive-information)
+### Personal and Sensitive Information

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-## [Considerations for Using the Data](#considerations-for-using-the-data)
+## Considerations for Using the Data

-### [Social Impact of Dataset](#social-impact-of-dataset)
+### Social Impact of Dataset

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Discussion of Biases](#discussion-of-biases)
+### Discussion of Biases

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Other Known Limitations](#other-known-limitations)
+### Other Known Limitations

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-## [Additional Information](#additional-information)
+## Additional Information

-### [Dataset Curators](#dataset-curators)
+### Dataset Curators

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Licensing Information](#licensing-information)
+### Licensing Information

 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

-### [Citation Information](#citation-information)
+### Citation Information

 ```
 @inproceedings{tksbuchholz2000conll,
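
The card above documents two splits and three parallel tag fields per example. As a quick illustration (a minimal sketch, not part of this commit; it assumes the dataset is loadable by name, as it is on the Hub), the integer label ids can be decoded back to their string form via the `ClassLabel` feature:

```python
from datasets import load_dataset

# Load both splits listed in the card (train: 8937 examples, test: 2013).
dataset = load_dataset("conll2000")

# Each example holds parallel lists: tokens, pos_tags, chunk_tags.
example = dataset["train"][0]
print(example["tokens"][:6])

# chunk_tags are ClassLabel ids; int2str recovers the IOB strings
# (O, B-NP, I-NP, ...) described under "Data Fields".
chunk_feature = dataset["train"].features["chunk_tags"].feature
print([chunk_feature.int2str(tag) for tag in example["chunk_tags"][:6]])
```
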
conll2000.py CHANGED
@@ -16,11 +16,12 @@
 # Lint as: python3
 """Introduction to the CoNLL-2000 Shared Task: Chunking"""

-import logging
-
 import datasets


+logger = datasets.logging.get_logger(__name__)
+
+
 _CITATION = """\
 @inproceedings{tksbuchholz2000conll,
 author = "Tjong Kim Sang, Erik F. and Sabine Buchholz",
@@ -178,7 +179,7 @@ class Conll2000(datasets.GeneratorBasedBuilder):
         ]

     def _generate_examples(self, filepath):
-        logging.info("⏳ Generating examples from = %s", filepath)
+        logger.info("⏳ Generating examples from = %s", filepath)
         with open(filepath, encoding="utf-8") as f:
             guid = 0
             tokens = []
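
The code change swaps the stdlib `logging` module for the library's own logger, so messages from loading scripts obey the verbosity users set through `datasets.logging`. A minimal sketch of how a caller would surface the `⏳ Generating examples from = ...` message (assuming a cold cache, since `_generate_examples` only runs when the dataset is actually built):

```python
import datasets

# INFO-level messages from loading scripts (like the logger.info call
# added above) are hidden by default; raise the library verbosity first.
datasets.logging.set_verbosity_info()

# Building the dataset triggers _generate_examples and its info log.
# (On a warm cache the prepared Arrow files are reused and no log appears.)
dataset = datasets.load_dataset("conll2000")
```
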