leondz committed
Commit 3b67543
1 Parent(s): 3270c8b

update refs

Files changed (3):
  1. README.md +197 -0
  2. dataset_infos.json +1 -0
  3. twitter_pos.py +223 -0
README.md ADDED
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
licenses:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Twitter Part-of-speech
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- part-of-speech-tagging
---

# Dataset Card for "twitter-pos"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://gate.ac.uk/wiki/twitter-postagger.html](https://gate.ac.uk/wiki/twitter-postagger.html)
- **Repository:** [https://github.com/GateNLP/gateplugin-Twitter](https://github.com/GateNLP/gateplugin-Twitter)
- **Paper:** [https://aclanthology.org/D11-1141.pdf](https://aclanthology.org/D11-1141.pdf), [https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191](https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 51.96 MiB
- **Size of the generated dataset:** 251.22 KiB
- **Total amount of disk used:** 52.20 MiB

### Dataset Summary

Part-of-speech tagging is a basic NLP task, but Twitter text is difficult to
part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This dataset contains two subsets of English PoS-tagged tweets:

* Ritter, with train/dev/test
* Foster, with dev/test

The splits are those defined in the Derczynski et al. paper; the data itself comes from Ritter et al. and Foster et al.
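
Each subset loads by config name; a minimal loading sketch, assuming `twitter_pos` as the dataset path (substitute the actual Hub identifier for this repository):

```python
from datasets import load_dataset

# "twitter_pos" is an assumed path; the config name selects the subset.
ritter = load_dataset("twitter_pos", "ritter")  # train / validation / test
foster = load_dataset("twitter_pos", "foster")  # validation / test only
print(ritter)
```
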
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

English, non-region-specific. `bcp47:en`

## Dataset Structure

### Data Instances

An example of 'train' looks as follows. (The instance below is constructed to illustrate the schema rather than quoted verbatim from the data; `pos_tags` holds indices into the tagset listed under Data Fields.)

```
{
    "id": "0",
    "tokens": ["RT", "@user", ":", "Just", "landed", "in", "New_York", "#travel"],
    "pos_tags": [48, 50, 9, 31, 39, 16, 23, 49]
}
```

### Data Fields

The data fields are the same among all splits.

#### twitter-pos
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{
    0: '"',     1: "''",   2: "#",    3: "%",    4: "$",
    5: "(",     6: ")",    7: ",",    8: ".",    9: ":",
    10: "``",   11: "CC",  12: "CD",  13: "DT",  14: "EX",
    15: "FW",   16: "IN",  17: "JJ",  18: "JJR", 19: "JJS",
    20: "LS",   21: "MD",  22: "NN",  23: "NNP", 24: "NNPS",
    25: "NNS",  26: "NN|SYM", 27: "PDT", 28: "POS", 29: "PRP",
    30: "PRP$", 31: "RB",  32: "RBR", 33: "RBS", 34: "RP",
    35: "SYM",  36: "TO",  37: "UH",  38: "VB",  39: "VBD",
    40: "VBG",  41: "VBN", 42: "VBP", 43: "VBZ", 44: "WDT",
    45: "WP",   46: "WP$", 47: "WRB", 48: "RT",  49: "HT",
    50: "USR",  51: "URL",
}
```

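Since `pos_tags` are integer `ClassLabel` values, the tag strings can be recovered from the dataset features; a sketch under the same assumed dataset path as above:

```python
from datasets import load_dataset

ds = load_dataset("twitter_pos", "ritter", split="train")  # path is an assumption
tag_names = ds.features["pos_tags"].feature.names  # ClassLabel names, in index order
example = ds[0]
print(list(zip(example["tokens"], (tag_names[i] for i in example["pos_tags"]))))
```
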
### Data Splits

| name         | tokens | sentences |
|--------------|-------:|----------:|
| ritter train |  10652 |       551 |
| ritter dev   |   2242 |       118 |
| ritter test  |   2291 |       118 |
| foster dev   |   2998 |       270 |
| foster test  |   2841 |       250 |
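
The counts above can be re-derived from the loaded splits (note that the loader exposes `dev` as `validation`); a quick check, again assuming the dataset path:

```python
from datasets import load_dataset

ds = load_dataset("twitter_pos", "ritter")  # path is an assumption
for split in ds:
    n_tokens = sum(len(ex["tokens"]) for ex in ds[split])
    print(split, len(ds[split]), "sentences,", n_tokens, "tokens")
```
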
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

Creative Commons Attribution 4.0 International (CC BY 4.0), per the `licenses` field in the YAML header above.

### Citation Information

```
@inproceedings{ritter2011named,
  title={Named entity recognition in tweets: an experimental study},
  author={Ritter, Alan and Clark, Sam and Etzioni, Oren and others},
  booktitle={Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing},
  pages={1524--1534},
  year={2011}
}

@inproceedings{foster2011hardtoparse,
  title={\# hardtoparse: POS Tagging and Parsing the Twitterverse},
  author={Foster, Jennifer and Cetinoglu, Ozlem and Wagner, Joachim and Le Roux, Joseph and Hogan, Stephen and Nivre, Joakim and Hogan, Deirdre and Van Genabith, Josef},
  booktitle={Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence},
  year={2011}
}

@inproceedings{derczynski2013twitter,
  title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
  author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
  booktitle={Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2013)},
  pages={198--206},
  year={2013}
}
```

### Contributions

Author uploaded ([@leondz](https://github.com/leondz))
dataset_infos.json ADDED
+ {"foster": {"description": "Part-of-speech information is basic NLP task. However, Twitter text\nis difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.\nThis dataset contains two datasets for English PoS tagging for tweets:\n\n* Ritter, with train/dev/test\n* Foster, with dev/test\n\nFor more details see:\n\n* https://gate.ac.uk/wiki/twitter-postagger.html\n* https://aclanthology.org/D11-1141.pdf\n* https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191\n", "citation": "@inproceedings{derczynski2013twitter,\n title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},\n author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},\n booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},\n pages={198--206},\n year={2013}\n}\n", "homepage": "https://gate.ac.uk/wiki/twitter-postagger.html", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 52, "names": ["\"", "''", "#", "%", "$", "(", ")", ",", ".", ":", "``", "CC", "CD", "DT", "EX", "FW", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NN", "NNP", "NNPS", "NNS", "NN|SYM", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB", "RT", "HT", "USR", "URL"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "twitter_pos", "config_name": "foster", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 52293, "num_examples": 269, "dataset_name": "twitter_pos"}, "test": {"name": "test", "num_bytes": 49911, "num_examples": 250, "dataset_name": "twitter_pos"}}, "download_checksums": {"http://downloads.gate.ac.uk/twitie/twitie-tagger.zip": {"num_bytes": 54479912, "checksum": "8f536fe329bf2f0e9f442f21d6df7da524a0b168830675796a7a9b8568cf636d"}}, "download_size": 54479912, "post_processing_size": null, "dataset_size": 102204, "size_in_bytes": 54582116}, "ritter": {"description": "Part-of-speech information is basic NLP task. 
However, Twitter text\nis difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.\nThis dataset contains two datasets for English PoS tagging for tweets:\n\n* Ritter, with train/dev/test\n* Foster, with dev/test\n\nFor more details see:\n\n* https://gate.ac.uk/wiki/twitter-postagger.html\n* https://aclanthology.org/D11-1141.pdf\n* https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191\n", "citation": "@inproceedings{derczynski2013twitter,\n title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},\n author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},\n booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},\n pages={198--206},\n year={2013}\n}\n", "homepage": "https://gate.ac.uk/wiki/twitter-postagger.html", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 52, "names": ["\"", "''", "#", "%", "$", "(", ")", ",", ".", ":", "``", "CC", "CD", "DT", "EX", "FW", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NN", "NNP", "NNPS", "NNS", "NN|SYM", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB", "RT", "HT", "USR", "URL"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "twitter_pos", "config_name": "ritter", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 38047, "num_examples": 118, "dataset_name": "twitter_pos"}, "test": {"name": "test", "num_bytes": 38899, "num_examples": 118, "dataset_name": "twitter_pos"}, "train": {"name": "train", "num_bytes": 180303, "num_examples": 551, "dataset_name": "twitter_pos"}}, "download_checksums": {"http://downloads.gate.ac.uk/twitie/twitie-tagger.zip": {"num_bytes": 54479912, "checksum": "8f536fe329bf2f0e9f442f21d6df7da524a0b168830675796a7a9b8568cf636d"}}, "download_size": 54479912, "post_processing_size": null, "dataset_size": 257249, "size_in_bytes": 54737161}}
twitter_pos.py ADDED
# coding=utf-8
# Copyright 2020 HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""Twitter part-of-speech tagging datasets (Ritter and Foster subsets)"""

import os

import datasets


logger = datasets.logging.get_logger(__name__)


_CITATION = """\
@inproceedings{ritter2011named,
    title={Named entity recognition in tweets: an experimental study},
    author={Ritter, Alan and Clark, Sam and Etzioni, Oren and others},
    booktitle={Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing},
    pages={1524--1534},
    year={2011}
}

@inproceedings{foster2011hardtoparse,
    title={\# hardtoparse: POS Tagging and Parsing the Twitterverse},
    author={Foster, Jennifer and Cetinoglu, Ozlem and Wagner, Joachim and Le Roux, Joseph and Hogan, Stephen and Nivre, Joakim and Hogan, Deirdre and Van Genabith, Josef},
    booktitle={Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence},
    year={2011}
}

@inproceedings{derczynski2013twitter,
    title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
    author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
    booktitle={Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2013)},
    pages={198--206},
    year={2013}
}
"""

_DESCRIPTION = """\
Part-of-speech tagging is a basic NLP task, but Twitter text is difficult to
part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This dataset contains two subsets of English PoS-tagged tweets:

* Ritter, with train/dev/test
* Foster, with dev/test

The splits are those defined in the Derczynski paper; the data itself comes
from Ritter and Foster.

For more details see:

* https://gate.ac.uk/wiki/twitter-postagger.html
* https://aclanthology.org/D11-1141.pdf
* https://www.aaai.org/ocs/index.php/ws/aaaiw11/paper/download/3912/4191
"""

_URL = "http://downloads.gate.ac.uk/twitie/twitie-tagger.zip"
_RITTER_TRAIN = "twitie-tagger/corpora/ritter_train.stanford"
_RITTER_DEV = "twitie-tagger/corpora/ritter_dev.stanford"
_RITTER_TEST = "twitie-tagger/corpora/ritter_eval.stanford"
_FOSTER_TRAIN = None  # no training split is distributed for the Foster subset
_FOSTER_DEV = "twitie-tagger/corpora/foster_dev.stanford"
_FOSTER_TEST = "twitie-tagger/corpora/foster_eval.stanford"


class TwitterPosConfig(datasets.BuilderConfig):
    """BuilderConfig for TwitterPos"""

    def __init__(self, **kwargs):
        """BuilderConfig for TwitterPos.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(TwitterPosConfig, self).__init__(**kwargs)


class TwitterPos(datasets.GeneratorBasedBuilder):
    """TwitterPos dataset."""

    BUILDER_CONFIGS = [
        TwitterPosConfig(name="foster", description="Foster English Twitter PoS bootstrap dataset"),
        TwitterPosConfig(name="ritter", description="Ritter English Twitter PoS bootstrap dataset"),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "pos_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                '"', "''", "#", "%", "$", "(", ")", ",", ".", ":", "``",
                                "CC", "CD", "DT", "EX", "FW", "IN",
                                "JJ", "JJR", "JJS", "LS", "MD",
                                "NN", "NNP", "NNPS", "NNS", "NN|SYM",
                                "PDT", "POS", "PRP", "PRP$",
                                "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH",
                                "VB", "VBD", "VBG", "VBN", "VBP", "VBZ",
                                "WDT", "WP", "WP$", "WRB",
                                "RT", "HT", "USR", "URL",
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://gate.ac.uk/wiki/twitter-postagger.html",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        downloaded_file = dl_manager.download_and_extract(_URL)

        if self.config.name == "ritter":
            data_files = {
                "train": os.path.join(downloaded_file, _RITTER_TRAIN),
                "dev": os.path.join(downloaded_file, _RITTER_DEV),
                "test": os.path.join(downloaded_file, _RITTER_TEST),
            }
        elif self.config.name == "foster":
            data_files = {
                "dev": os.path.join(downloaded_file, _FOSTER_DEV),
                "test": os.path.join(downloaded_file, _FOSTER_TEST),
            }

        splits = [
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
        ]

        # only the ritter config ships a training split
        if "train" in data_files:
            splits.append(datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}))

        return splits

    def _generate_examples(self, filepath):
        logger.info("⏳ Generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            guid = 0
            for line in f:
                if line.startswith("-DOCSTART-") or line.strip() == "":
                    continue
                # repair tag typos present in the distributed corpus files
                line = line.replace("_VPB ", "_VBP ")
                line = line.replace("_TD ", "_DT ")
                line = line.replace("_ADVP ", "_RB ")
                line = line.replace("_NONE ", "_: ")
                line = line.replace(" please_VPP ", " please_VBP ")
                line = line.replace(' ".._O ', ' ".._" ')
                # twitter-pos gives one sequence per line, as token_tag pairs;
                # the tag is whatever follows the last underscore, so tokens
                # that themselves contain underscores are preserved
                annotated_words = line.strip().split(" ")
                tokens = ["_".join(token.split("_")[:-1]) for token in annotated_words]
                pos_tags = [token.split("_")[-1] for token in annotated_words]
                yield guid, {
                    "id": str(guid),
                    "tokens": tokens,
                    "pos_tags": pos_tags,
                }
                guid += 1
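
For reference, a standalone sketch of the `token_tag` parsing done in `_generate_examples` above, run on a made-up line (hypothetical data, not drawn from the corpus); splitting on the last underscore keeps underscores inside tokens intact:

```python
line = "RT_RT @user_USR :_: Just_RB landed_VBD in_IN New_York_NNP #travel_HT"
annotated_words = line.strip().split(" ")
tokens = ["_".join(w.split("_")[:-1]) for w in annotated_words]
pos_tags = [w.split("_")[-1] for w in annotated_words]
print(tokens)    # ['RT', '@user', ':', 'Just', 'landed', 'in', 'New_York', '#travel']
print(pos_tags)  # ['RT', 'USR', ':', 'RB', 'VBD', 'IN', 'NNP', 'HT']
```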