Commit 48b1c1a by leondz
1 Parent(s): 94f8e8e

first pass

Files changed (3):
  1. README.md +182 -0
  2. dataset_infos.json +1 -0
  3. twitter_pos_vcb.py +176 -0
README.md ADDED
@@ -0,0 +1,182 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1M<n<10M
+ source_datasets:
+ - original
+ task_categories:
+ - token-classification
+ task_ids:
+ - part-of-speech-tagging
+ paperswithcode_id:
+ pretty_name: Twitter PoS VCB
+ ---
+
+ # Dataset Card for "twitter-pos-vcb"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://gate.ac.uk/wiki/twitter-postagger.html](https://gate.ac.uk/wiki/twitter-postagger.html)
+ - **Repository:** [https://github.com/GateNLP/gateplugin-Twitter](https://github.com/GateNLP/gateplugin-Twitter)
+ - **Paper:** [https://aclanthology.org/R13-1026.pdf](https://aclanthology.org/R13-1026.pdf)
+ - **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
+ - **Size of downloaded dataset files:** 4.51 MiB
+ - **Size of the generated dataset:** 26.88 MiB
+ - **Total amount of disk used:** 31.39 MiB
+
+ ### Dataset Summary
+
+ Part-of-speech tagging is a basic NLP task, but Twitter text
+ is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
+ This dataset is the vote-constrained bootstrapped data generated to support state-of-the-art results.
+
+ The data comprises about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.
+ The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
+ jointly using the CMU ARK tagger and Ritter's T-POS tagger. A tweet is added to the dataset
+ only when the outputs of both taggers are completely compatible over the whole tweet.
+
+ This data is recommended for use as training data **only**, and not as evaluation data.
+
+ For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf
+
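+ A minimal loading sketch, assuming the `datasets` library is installed and this repository's loading script `twitter_pos_vcb.py` is in the working directory:
+
+ ```python
+ from datasets import load_dataset
+
+ # Only a "train" split is provided (see Data Splits below).
+ ds = load_dataset("twitter_pos_vcb.py", "twitter-pos-vcb", split="train")
+
+ print(ds[0]["tokens"])    # list of token strings
+ print(ds[0]["pos_tags"])  # parallel list of integer PoS label ids
+ ```
+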
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ English, non-region-specific. `bcp47:en`
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example of 'train' looks as follows.
+
+ ```
+
+ ```
+
+
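+ The underlying corpus file stores one tweet per line as space-separated `token_tag` pairs, which the loading script splits into parallel `tokens` and `pos_tags` lists. A sketch of that mapping, using a hypothetical line rather than real corpus content:
+
+ ```python
+ raw_line = "this_DT is_VBZ a_DT tweet_NN"  # hypothetical line, for illustration only
+
+ words = raw_line.strip().split(" ")
+ tokens = ["_".join(w.split("_")[:-1]) for w in words]  # ['this', 'is', 'a', 'tweet']
+ pos_tags = [w.split("_")[-1] for w in words]           # ['DT', 'VBZ', 'DT', 'NN']
+ ```
+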
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### twitter-pos-vcb
+ - `id`: a `string` feature.
+ - `tokens`: a `list` of `string` features.
+ - `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
+
+ ```python
+ {
+      0: '"',    1: "''",    2: "#",     3: "$",      4: "(",       5: ")",    6: ",",
+      7: ".",    8: ":",     9: "``",   10: "CC",    11: "CD",     12: "DT",  13: "EX",
+     14: "FW",  15: "IN",   16: "JJ",   17: "JJR",   18: "JJS",    19: "LS",  20: "MD",
+     21: "NN",  22: "NNP",  23: "NNPS", 24: "NNS",   25: "NN|SYM", 26: "PDT", 27: "POS",
+     28: "PRP", 29: "PRP$", 30: "RB",   31: "RBR",   32: "RBS",    33: "RP",  34: "SYM",
+     35: "TO",  36: "UH",   37: "VB",   38: "VBD",   39: "VBG",    40: "VBN", 41: "VBP",
+     42: "VBZ", 43: "WDT",  44: "WP",   45: "WP$",   46: "WRB",    47: "RT",  48: "HT",
+     49: "USR", 50: "URL"
+ }
+ ```
+
+
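+ To map the integer labels back to tag strings, use the `ClassLabel` feature wrapped by the `pos_tags` sequence. A short sketch, assuming `ds` was loaded as in the example above:
+
+ ```python
+ pos_label = ds.features["pos_tags"].feature  # the ClassLabel inside the Sequence
+
+ example = ds[0]
+ tag_names = pos_label.int2str(example["pos_tags"])  # e.g. [21, 37, ...] -> ['NN', 'VB', ...]
+ ```
+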
+ ### Data Splits
+
+ | name            |    tokens | sentences |
+ |-----------------|----------:|----------:|
+ | twitter-pos-vcb | 1,543,126 |   159,492 |
+
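+ These counts can be re-derived from the loaded split. A quick sketch, assuming `ds` from the loading example above:
+
+ ```python
+ n_sentences = len(ds)                           # expected: 159,492
+ n_tokens = sum(len(ex["tokens"]) for ex in ds)  # expected: 1,543,126
+ print(n_sentences, n_tokens)
+ ```
+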
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+
+ ### Citation Information
+
+ ```
+ @inproceedings{derczynski2013twitter,
+   title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
+   author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
+   booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},
+   pages={198--206},
+   year={2013}
+ }
+ ```
+
+
+ ### Contributions
+
+ Author uploaded ([@leondz](https://github.com/leondz))
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"twitter-pos-vcb": {"description": "Part-of-speech information is basic NLP task. However, Twitter text\nis difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.\nThis data is the vote-constrained bootstrapped data generate to support state-of-the-art results.\n\nThe data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.\nThe tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged\njointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both these taggers' outputs\nare completely compatible over a whole tweet, is that tweet added to the dataset.\n\nThis data is recommend for use a training data **only**, and not evaluation data.\n\nFor more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf\n", "citation": "@inproceedings{derczynski2013twitter,\n title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},\n author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},\n booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},\n pages={198--206},\n year={2013}\n}\n", "homepage": "https://gate.ac.uk/wiki/twitter-postagger.html", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags": {"feature": {"num_classes": 51, "names": ["\"", "''", "#", "$", "(", ")", ",", ".", ":", "``", "CC", "CD", "DT", "EX", "FW", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NN", "NNP", "NNPS", "NNS", "NN|SYM", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB", "RT", "HT", "USR", "URL"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "twitter_pos_vcb", "config_name": "twitter-pos-vcb", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 28185130, "num_examples": 159492, "dataset_name": "twitter_pos_vcb"}}, "download_checksums": {"http://downloads.gate.ac.uk/twitter/twitter_bootstrap_corpus.tar.gz": {"num_bytes": 4725199, "checksum": "f9410879cbe0eef5a1fb2963bd3f3ccee081eb67b6db46baf06fef54c36c4922"}}, "download_size": 4725199, "post_processing_size": null, "dataset_size": 28185130, "size_in_bytes": 32910329}}
twitter_pos_vcb.py ADDED
@@ -0,0 +1,176 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """Twitter PoS VCB: vote-constrained bootstrapped part-of-speech annotations for English tweets."""
+
+ import os
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @inproceedings{derczynski2013twitter,
+   title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
+   author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
+   booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},
+   pages={198--206},
+   year={2013}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Part-of-speech tagging is a basic NLP task, but Twitter text
+ is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
+ This dataset is the vote-constrained bootstrapped data generated to support state-of-the-art results.
+
+ The data comprises about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.
+ The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
+ jointly using the CMU ARK tagger and Ritter's T-POS tagger. A tweet is added to the dataset
+ only when the outputs of both taggers are completely compatible over the whole tweet.
+
+ This data is recommended for use as training data **only**, and not as evaluation data.
+
+ For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf
+ """
+
+ _URL = "http://downloads.gate.ac.uk/twitter/twitter_bootstrap_corpus.tar.gz"
+ _TRAINING_FILE = "gate_twitter_bootstrap_corpus.1543K.tokens"
+
+
+ class TwitterPosVcbConfig(datasets.BuilderConfig):
+     """BuilderConfig for TwitterPosVcb"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for TwitterPosVcb.
+
+         Args:
+           **kwargs: keyword arguments forwarded to super.
+         """
+         super(TwitterPosVcbConfig, self).__init__(**kwargs)
+
+
+ class TwitterPosVcb(datasets.GeneratorBasedBuilder):
+     """TwitterPosVcb dataset."""
+
+     BUILDER_CONFIGS = [
+         TwitterPosVcbConfig(name="twitter-pos-vcb", version=datasets.Version("1.0.0"), description="English Twitter PoS bootstrap dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "pos_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 '"',
+                                 "''",
+                                 "#",
+                                 "$",
+                                 "(",
+                                 ")",
+                                 ",",
+                                 ".",
+                                 ":",
+                                 "``",
+                                 "CC",
+                                 "CD",
+                                 "DT",
+                                 "EX",
+                                 "FW",
+                                 "IN",
+                                 "JJ",
+                                 "JJR",
+                                 "JJS",
+                                 "LS",
+                                 "MD",
+                                 "NN",
+                                 "NNP",
+                                 "NNPS",
+                                 "NNS",
+                                 "NN|SYM",
+                                 "PDT",
+                                 "POS",
+                                 "PRP",
+                                 "PRP$",
+                                 "RB",
+                                 "RBR",
+                                 "RBS",
+                                 "RP",
+                                 "SYM",
+                                 "TO",
+                                 "UH",
+                                 "VB",
+                                 "VBD",
+                                 "VBG",
+                                 "VBN",
+                                 "VBP",
+                                 "VBZ",
+                                 "WDT",
+                                 "WP",
+                                 "WP$",
+                                 "WRB",
+                                 "RT",
+                                 "HT",
+                                 "USR",
+                                 "URL",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://gate.ac.uk/wiki/twitter-postagger.html",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         downloaded_file = dl_manager.download_and_extract(_URL)
+         data_files = {
+             "train": os.path.join(downloaded_file, _TRAINING_FILE),
+         }
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         logger.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             for line in f:
+                 tokens = []
+                 pos_tags = []
+                 if line.startswith("-DOCSTART-") or line.strip() == "" or line == "\n":
+                     continue
+                 else:
+                     # twitter-pos-vcb gives one sequence per line, as space-separated token_tag pairs;
+                     # tokens may themselves contain underscores, so each pair is split on its last "_"
+                     annotated_words = line.strip().split(' ')
+                     tokens = ['_'.join(token.split('_')[:-1]) for token in annotated_words]
+                     pos_tags = [token.split('_')[-1] for token in annotated_words]
+                     yield guid, {
+                         "id": str(guid),
+                         "tokens": tokens,
+                         "pos_tags": pos_tags,
+                     }
+                     guid += 1