albertvillanova committed
Commit: 1f97567
Parent: bc790b9

Fix fine classes in trec dataset (#4801)

* Fix fine label from 47 to 50 classes
* Update dataset card
* Update metadata JSON
* Update dummy data path
* Remove tags tag from dataset card

Commit from https://github.com/huggingface/datasets/commit/cd00f13ce5c280b55c19e176165aec902a438ef2

README.md CHANGED
@@ -1,8 +1,24 @@
 ---
+annotations_creators:
+- expert-generated
 language:
 - en
-paperswithcode_id: trecqa
+language_creators:
+- expert-generated
+license:
+- unknown
+multilinguality:
+- monolingual
 pretty_name: Text Retrieval Conference Question Answering
+size_categories:
+- 1K<n<10K
+source_datasets:
+- original
+task_categories:
+- text-classification
+task_ids:
+- multi-class-classification
+paperswithcode_id: trecqa
 ---
 
 # Dataset Card for "trec"
@@ -43,9 +59,11 @@ pretty_name: Text Retrieval Conference Question Answering
 
 ### Dataset Summary
 
-The Text REtrieval Conference (TREC) Question Classification dataset contains 5500 labeled questions in training set and another 500 for test set. The dataset has 6 labels, 47 level-2 labels. Average length of each sentence is 10, vocabulary size of 8700.
+The Text REtrieval Conference (TREC) Question Classification dataset contains 5500 labeled questions in training set and another 500 for test set.
 
-Data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10 which serves as the test set.
+The dataset has 6 coarse class labels and 50 fine class labels. Average length of each sentence is 10, vocabulary size of 8700.
+
+Data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10 which serves as the test set. These questions were manually labeled.
 
 ### Supported Tasks and Leaderboards
 
@@ -53,14 +71,12 @@ Data are collected from four sources: 4,500 English questions published by USC (
 
 ### Languages
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+The language in this dataset is English (`en`).
 
 ## Dataset Structure
 
 ### Data Instances
 
-#### default
-
 - **Size of downloaded dataset files:** 0.34 MB
 - **Size of the generated dataset:** 0.39 MB
 - **Total amount of disk used:** 0.74 MB
@@ -68,9 +84,9 @@ Data are collected from four sources: 4,500 English questions published by USC (
 An example of 'train' looks as follows.
 ```
 {
-    "label-coarse": 1,
-    "label-fine": 2,
-    "text": "What fowl grabs the spotlight after the Chinese Year of the Monkey ?"
+    'text': 'How did serfdom develop in and then leave Russia ?',
+    'coarse_label': 2,
+    'fine_label': 26
 }
 ```
 
@@ -78,16 +94,78 @@ An example of 'train' looks as follows.
 
 The data fields are the same among all splits.
 
-#### default
-- `label-coarse`: a classification label, with possible values including `DESC` (0), `ENTY` (1), `ABBR` (2), `HUM` (3), `NUM` (4).
-- `label-fine`: a classification label, with possible values including `manner` (0), `cremat` (1), `animal` (2), `exp` (3), `ind` (4).
-- `text`: a `string` feature.
+- `text` (`str`): Text of the question.
+- `coarse_label` (`ClassLabel`): Coarse class label. Possible values are:
+  - 'ABBR' (0): Abbreviation.
+  - 'ENTY' (1): Entity.
+  - 'DESC' (2): Description and abstract concept.
+  - 'HUM' (3): Human being.
+  - 'LOC' (4): Location.
+  - 'NUM' (5): Numeric value.
+- `fine_label` (`ClassLabel`): Fine class label. Possible values are:
+  - ABBREVIATION:
+    - 'ABBR:abb' (0): Abbreviation.
+    - 'ABBR:exp' (1): Expression abbreviated.
+  - ENTITY:
+    - 'ENTY:animal' (2): Animal.
+    - 'ENTY:body' (3): Organ of body.
+    - 'ENTY:color' (4): Color.
+    - 'ENTY:cremat' (5): Invention, book and other creative piece.
+    - 'ENTY:currency' (6): Currency name.
+    - 'ENTY:dismed' (7): Disease and medicine.
+    - 'ENTY:event' (8): Event.
+    - 'ENTY:food' (9): Food.
+    - 'ENTY:instru' (10): Musical instrument.
+    - 'ENTY:lang' (11): Language.
+    - 'ENTY:letter' (12): Letter like a-z.
+    - 'ENTY:other' (13): Other entity.
+    - 'ENTY:plant' (14): Plant.
+    - 'ENTY:product' (15): Product.
+    - 'ENTY:religion' (16): Religion.
+    - 'ENTY:sport' (17): Sport.
+    - 'ENTY:substance' (18): Element and substance.
+    - 'ENTY:symbol' (19): Symbols and sign.
+    - 'ENTY:techmeth' (20): Techniques and method.
+    - 'ENTY:termeq' (21): Equivalent term.
+    - 'ENTY:veh' (22): Vehicle.
+    - 'ENTY:word' (23): Word with a special property.
+  - DESCRIPTION:
+    - 'DESC:def' (24): Definition of something.
+    - 'DESC:desc' (25): Description of something.
+    - 'DESC:manner' (26): Manner of an action.
+    - 'DESC:reason' (27): Reason.
+  - HUMAN:
+    - 'HUM:gr' (28): Group or organization of persons.
+    - 'HUM:ind' (29): Individual.
+    - 'HUM:title' (30): Title of a person.
+    - 'HUM:desc' (31): Description of a person.
+  - LOCATION:
+    - 'LOC:city' (32): City.
+    - 'LOC:country' (33): Country.
+    - 'LOC:mount' (34): Mountain.
+    - 'LOC:other' (35): Other location.
+    - 'LOC:state' (36): State.
+  - NUMERIC:
+    - 'NUM:code' (37): Postcode or other code.
+    - 'NUM:count' (38): Number of something.
+    - 'NUM:date' (39): Date.
+    - 'NUM:dist' (40): Distance, linear measure.
+    - 'NUM:money' (41): Price.
+    - 'NUM:ord' (42): Order, rank.
+    - 'NUM:other' (43): Other number.
+    - 'NUM:period' (44): Lasting time of something.
+    - 'NUM:perc' (45): Percent, fraction.
+    - 'NUM:speed' (46): Speed.
+    - 'NUM:temp' (47): Temperature.
+    - 'NUM:volsize' (48): Size, area and volume.
+    - 'NUM:weight' (49): Weight.
+
 
 ### Data Splits
 
-| name  |train|test|
-|-------|----:|---:|
-|default| 5452| 500|
+| name    | train | test |
+|---------|------:|-----:|
+| default |  5452 |  500 |
 
 ## Dataset Creation
 
@@ -165,7 +243,6 @@ The data fields are the same among all splits.
     year = "2001",
     url = "https://www.aclweb.org/anthology/H01-1069",
 }
-
 ```
 
 
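For downstream users, the practical effect of this diff is the renamed label columns (`coarse_label`/`fine_label` instead of `label-coarse`/`label-fine`) and their new orderings. A minimal sketch of loading the fixed dataset and decoding the stored integers back to label names, using the standard `ClassLabel.int2str` helper; the printed values follow the card's example above:

```python
from datasets import load_dataset

# Load the updated dataset; the fixed script exposes `text`, `coarse_label`
# and `fine_label` instead of the old `label-coarse`/`label-fine` columns.
ds = load_dataset("trec", split="train")

example = ds[0]
print(example)
# {'text': 'How did serfdom develop in and then leave Russia ?',
#  'coarse_label': 2, 'fine_label': 26}   (per the card's example above)

# ClassLabel features map the stored integer indices back to label names.
print(ds.features["coarse_label"].int2str(example["coarse_label"]))  # 'DESC'
print(ds.features["fine_label"].int2str(example["fine_label"]))      # 'DESC:manner'
```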
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"default": {"description": "The Text REtrieval Conference (TREC) Question Classification dataset contains 5500 labeled questions in training set and another 500 for test set. The dataset has 6 labels, 47 level-2 labels. Average length of each sentence is 10, vocabulary size of 8700.\n\nData are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10 which serves as the test set.\n", "citation": "@inproceedings{li-roth-2002-learning,\n title = \"Learning Question Classifiers\",\n author = \"Li, Xin and\n Roth, Dan\",\n booktitle = \"{COLING} 2002: The 19th International Conference on Computational Linguistics\",\n year = \"2002\",\n url = \"https://www.aclweb.org/anthology/C02-1150\",\n}\n@inproceedings{hovy-etal-2001-toward,\n title = \"Toward Semantics-Based Answer Pinpointing\",\n author = \"Hovy, Eduard and\n Gerber, Laurie and\n Hermjakob, Ulf and\n Lin, Chin-Yew and\n Ravichandran, Deepak\",\n booktitle = \"Proceedings of the First International Conference on Human Language Technology Research\",\n year = \"2001\",\n url = \"https://www.aclweb.org/anthology/H01-1069\",\n}\n", "homepage": "https://cogcomp.seas.upenn.edu/Data/QA/QC/", "license": "", "features": {"label-coarse": {"num_classes": 6, "names": ["DESC", "ENTY", "ABBR", "HUM", "NUM", "LOC"], "names_file": null, "id": null, "_type": "ClassLabel"}, "label-fine": {"num_classes": 47, "names": ["manner", "cremat", "animal", "exp", "ind", "gr", "title", "def", "date", "reason", "event", "state", "desc", "count", "other", "letter", "religion", "food", "country", "color", "termeq", "city", "body", "dismed", "mount", "money", "product", "period", "substance", "sport", "plant", "techmeth", "volsize", "instru", "abb", "speed", "word", "lang", "perc", "code", "dist", "temp", "symbol", "ord", "veh", "weight", "currency"], "names_file": null, "id": null, "_type": "ClassLabel"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "trec", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 385090, "num_examples": 5452, "dataset_name": "trec"}, "test": {"name": "test", "num_bytes": 27983, "num_examples": 500, "dataset_name": "trec"}}, "download_checksums": {"https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label": {"num_bytes": 335858, "checksum": "9e4c8bdcaffb96ed61041bd64b564183d52793a8e91d84fc3a8646885f466ec3"}, "https://cogcomp.seas.upenn.edu/Data/QA/QC/TREC_10.label": {"num_bytes": 23354, "checksum": "033f22c028c2bbba9ca682f68ffe204dc1aa6e1cf35dd6207f2d4ca67f0d0e8e"}}, "download_size": 359212, "post_processing_size": null, "dataset_size": 413073, "size_in_bytes": 772285}}
+ {"default": {"description": "The Text REtrieval Conference (TREC) Question Classification dataset contains 5500 labeled questions in training set and another 500 for test set.\n\nThe dataset has 6 coarse class labels and 50 fine class labels. Average length of each sentence is 10, vocabulary size of 8700.\n\nData are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10 which serves as the test set. These questions were manually labeled.\n", "citation": "@inproceedings{li-roth-2002-learning,\n title = \"Learning Question Classifiers\",\n author = \"Li, Xin and\n Roth, Dan\",\n booktitle = \"{COLING} 2002: The 19th International Conference on Computational Linguistics\",\n year = \"2002\",\n url = \"https://www.aclweb.org/anthology/C02-1150\",\n}\n@inproceedings{hovy-etal-2001-toward,\n title = \"Toward Semantics-Based Answer Pinpointing\",\n author = \"Hovy, Eduard and\n Gerber, Laurie and\n Hermjakob, Ulf and\n Lin, Chin-Yew and\n Ravichandran, Deepak\",\n booktitle = \"Proceedings of the First International Conference on Human Language Technology Research\",\n year = \"2001\",\n url = \"https://www.aclweb.org/anthology/H01-1069\",\n}\n", "homepage": "https://cogcomp.seas.upenn.edu/Data/QA/QC/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "coarse_label": {"num_classes": 6, "names": ["ABBR", "ENTY", "DESC", "HUM", "LOC", "NUM"], "id": null, "_type": "ClassLabel"}, "fine_label": {"num_classes": 50, "names": ["ABBR:abb", "ABBR:exp", "ENTY:animal", "ENTY:body", "ENTY:color", "ENTY:cremat", "ENTY:currency", "ENTY:dismed", "ENTY:event", "ENTY:food", "ENTY:instru", "ENTY:lang", "ENTY:letter", "ENTY:other", "ENTY:plant", "ENTY:product", "ENTY:religion", "ENTY:sport", "ENTY:substance", "ENTY:symbol", "ENTY:techmeth", "ENTY:termeq", "ENTY:veh", "ENTY:word", "DESC:def", "DESC:desc", "DESC:manner", "DESC:reason", "HUM:gr", "HUM:ind", "HUM:title", "HUM:desc", "LOC:city", "LOC:country", "LOC:mount", "LOC:other", "LOC:state", "NUM:code", "NUM:count", "NUM:date", "NUM:dist", "NUM:money", "NUM:ord", "NUM:other", "NUM:period", "NUM:perc", "NUM:speed", "NUM:temp", "NUM:volsize", "NUM:weight"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "trec", "config_name": "default", "version": {"version_str": "2.0.0", "description": "Fine label contains 50 classes instead of 47.", "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 385090, "num_examples": 5452, "dataset_name": "trec"}, "test": {"name": "test", "num_bytes": 27983, "num_examples": 500, "dataset_name": "trec"}}, "download_checksums": {"https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label": {"num_bytes": 335858, "checksum": "9e4c8bdcaffb96ed61041bd64b564183d52793a8e91d84fc3a8646885f466ec3"}, "https://cogcomp.seas.upenn.edu/Data/QA/QC/TREC_10.label": {"num_bytes": 23354, "checksum": "033f22c028c2bbba9ca682f68ffe204dc1aa6e1cf35dd6207f2d4ca67f0d0e8e"}}, "download_size": 359212, "post_processing_size": null, "dataset_size": 413073, "size_in_bytes": 772285}}
dummy/{1.1.0 → 2.0.0}/dummy_data.zip RENAMED
File without changes
trec.py CHANGED
@@ -12,12 +12,22 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-""" The Text REtrieval Conference (TREC) Question Classification dataset."""
+"""The Text REtrieval Conference (TREC) Question Classification dataset."""
 
 
 import datasets
 
 
+_DESCRIPTION = """\
+The Text REtrieval Conference (TREC) Question Classification dataset contains 5500 labeled questions in training set and another 500 for test set.
+
+The dataset has 6 coarse class labels and 50 fine class labels. Average length of each sentence is 10, vocabulary size of 8700.
+
+Data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10 which serves as the test set. These questions were manually labeled.
+"""
+
+_HOMEPAGE = "https://cogcomp.seas.upenn.edu/Data/QA/QC/"
+
 _CITATION = """\
 @inproceedings{li-roth-2002-learning,
     title = "Learning Question Classifiers",
@@ -40,114 +50,98 @@ _CITATION = """\
 }
 """
 
-_DESCRIPTION = """\
-The Text REtrieval Conference (TREC) Question Classification dataset contains 5500 labeled questions in training set and another 500 for test set. The dataset has 6 labels, 47 level-2 labels. Average length of each sentence is 10, vocabulary size of 8700.
-
-Data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10 which serves as the test set.
-"""
-
 _URLs = {
     "train": "https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label",
     "test": "https://cogcomp.seas.upenn.edu/Data/QA/QC/TREC_10.label",
 }
 
-_COARSE_LABELS = ["DESC", "ENTY", "ABBR", "HUM", "NUM", "LOC"]
+_COARSE_LABELS = ["ABBR", "ENTY", "DESC", "HUM", "LOC", "NUM"]
 
 _FINE_LABELS = [
-    "manner",
-    "cremat",
-    "animal",
-    "exp",
-    "ind",
-    "gr",
-    "title",
-    "def",
-    "date",
-    "reason",
-    "event",
-    "state",
-    "desc",
-    "count",
-    "other",
-    "letter",
-    "religion",
-    "food",
-    "country",
-    "color",
-    "termeq",
-    "city",
-    "body",
-    "dismed",
-    "mount",
-    "money",
-    "product",
-    "period",
-    "substance",
-    "sport",
-    "plant",
-    "techmeth",
-    "volsize",
-    "instru",
-    "abb",
-    "speed",
-    "word",
-    "lang",
-    "perc",
-    "code",
-    "dist",
-    "temp",
-    "symbol",
-    "ord",
-    "veh",
-    "weight",
-    "currency",
+    "ABBR:abb",
+    "ABBR:exp",
+    "ENTY:animal",
+    "ENTY:body",
+    "ENTY:color",
+    "ENTY:cremat",
+    "ENTY:currency",
+    "ENTY:dismed",
+    "ENTY:event",
+    "ENTY:food",
+    "ENTY:instru",
+    "ENTY:lang",
+    "ENTY:letter",
+    "ENTY:other",
+    "ENTY:plant",
+    "ENTY:product",
+    "ENTY:religion",
+    "ENTY:sport",
+    "ENTY:substance",
+    "ENTY:symbol",
+    "ENTY:techmeth",
+    "ENTY:termeq",
+    "ENTY:veh",
+    "ENTY:word",
+    "DESC:def",
+    "DESC:desc",
+    "DESC:manner",
+    "DESC:reason",
+    "HUM:gr",
+    "HUM:ind",
+    "HUM:title",
+    "HUM:desc",
+    "LOC:city",
+    "LOC:country",
+    "LOC:mount",
+    "LOC:other",
+    "LOC:state",
+    "NUM:code",
+    "NUM:count",
+    "NUM:date",
+    "NUM:dist",
+    "NUM:money",
+    "NUM:ord",
+    "NUM:other",
+    "NUM:period",
+    "NUM:perc",
+    "NUM:speed",
+    "NUM:temp",
+    "NUM:volsize",
+    "NUM:weight",
 ]
 
 
 class Trec(datasets.GeneratorBasedBuilder):
-    """TODO: Short description of my dataset."""
+    """The Text REtrieval Conference (TREC) Question Classification dataset."""
 
-    VERSION = datasets.Version("1.1.0")
+    VERSION = datasets.Version("2.0.0", description="Fine label contains 50 classes instead of 47.")
 
     def _info(self):
-        # TODO: Specifies the datasets.DatasetInfo object
         return datasets.DatasetInfo(
-            # This is the description that will appear on the datasets page.
             description=_DESCRIPTION,
-            # datasets.features.FeatureConnectors
             features=datasets.Features(
                 {
-                    "label-coarse": datasets.ClassLabel(names=_COARSE_LABELS),
-                    "label-fine": datasets.ClassLabel(names=_FINE_LABELS),
                     "text": datasets.Value("string"),
+                    "coarse_label": datasets.ClassLabel(names=_COARSE_LABELS),
+                    "fine_label": datasets.ClassLabel(names=_FINE_LABELS),
                 }
             ),
-            # If there's a common (input, target) tuple from the features,
-            # specify them here. They'll be used if as_supervised=True in
-            # builder.as_dataset.
-            supervised_keys=None,
-            # Homepage of the dataset for documentation
-            homepage="https://cogcomp.seas.upenn.edu/Data/QA/QC/",
+            homepage=_HOMEPAGE,
            citation=_CITATION,
        )
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        # TODO: Downloads the data and defines the splits
-        # dl_manager is a datasets.download.DownloadManager that can be used to
-        # download and extract URLs
-        dl_files = dl_manager.download_and_extract(_URLs)
+        dl_files = dl_manager.download(_URLs)
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
-                # These kwargs will be passed to _generate_examples
                 gen_kwargs={
                     "filepath": dl_files["train"],
                 },
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.TEST,
-                # These kwargs will be passed to _generate_examples
                 gen_kwargs={
                     "filepath": dl_files["test"],
                 },
@@ -156,14 +150,13 @@ class Trec(datasets.GeneratorBasedBuilder):
 
     def _generate_examples(self, filepath):
         """Yields examples."""
-        # TODO: Yields (key, example) tuples from the dataset
         with open(filepath, "rb") as f:
             for id_, row in enumerate(f):
                 # One non-ASCII byte: sisterBADBYTEcity. We replace it with a space
-                label, _, text = row.replace(b"\xf0", b" ").strip().decode().partition(" ")
-                coarse_label, _, fine_label = label.partition(":")
+                fine_label, _, text = row.replace(b"\xf0", b" ").strip().decode().partition(" ")
+                coarse_label = fine_label.split(":")[0]
                 yield id_, {
-                    "label-coarse": coarse_label,
-                    "label-fine": fine_label,
                     "text": text,
+                    "coarse_label": coarse_label,
+                    "fine_label": fine_label,
                 }
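The core of the fix in `_generate_examples`: each raw line in the source files prefixes the question with a `COARSE:fine` token, and the new code keeps that whole token as the fine label while deriving the coarse label from its prefix, instead of splitting the two apart. A minimal sketch of that parsing on one sample line (the line shown is illustrative):

```python
# A sample raw line as found in train_5500.label / TREC_10.label (illustrative).
raw = b"DESC:manner How did serfdom develop in and then leave Russia ?"

# Same steps as the updated script: scrub the one known bad byte (\xf0),
# then split the leading "COARSE:fine" token off from the question text.
fine_label, _, text = raw.replace(b"\xf0", b" ").strip().decode().partition(" ")
coarse_label = fine_label.split(":")[0]

assert fine_label == "DESC:manner"
assert coarse_label == "DESC"
assert text == "How did serfdom develop in and then leave Russia ?"
```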