Datasets:
hkcancor

Languages:
Yue Chinese
Multilinguality:
monolingual
Size Categories:
10K<n<100K
Language Creators:
found
Annotations Creators:
expert-generated
Source Datasets:
original
Tags:
License:
cc-by-4.0
system HF staff committed on
Commit
7312480
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +189 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. hkcancor.py +315 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,189 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - yue
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ - sequence-modeling
+ task_ids:
+ - dialogue-modeling
+ - machine-translation
+ ---
+
+ # Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** http://compling.hss.ntu.edu.sg/hkcancor/
+ - **Repository:** https://github.com/fcbond/hkcancor
+ - **Paper:** [Luke and Wong, 2015](https://github.com/fcbond/hkcancor/blob/master/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf)
+ - **Leaderboard:** N/A
+ - **Point of Contact:** Luke Kang Kwong
+
+ ### Dataset Summary
+ The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded
+ between March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts)
+ and radio programmes (42 texts), each involving two to four speakers, plus one monologue text.
+
+ In total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation.
+
+ * Romanisation
+   * Follows conventions set by the Linguistic Society of Hong Kong (LSHK).
+ * POS
+   * The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena.
+   * To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the [Universal Dependencies 2.0](https://universaldependencies.org/u/pos/index.html) format. This mapping references the [PyCantonese](https://github.com/jacksonllee/pycantonese) library.
+
+
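+ For a quick look at the data, the corpus can be loaded through the `datasets` library (a minimal sketch, assuming the loader is available under the name `hkcancor`, as defined by this script):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the single "train" split defined by this loader.
+ dataset = load_dataset("hkcancor", split="train")
+
+ print(dataset.num_rows)       # 10801 utterances
+ print(dataset[0]["tokens"])   # word-segmented Chinese tokens
+ ```
+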
+ ### Supported Tasks and Leaderboards
+ [More Information Needed]
+
+ ### Languages
+ Yue Chinese / Cantonese (Hong Kong).
+
+ ## Dataset Structure
+ This corpus has 10,801 utterances and approximately 230,000 Chinese words.
+ There is no predefined split.
+
+ ### Data Instances
+ Each instance contains a conversation id, a speaker id within that conversation,
+ a turn number, part-of-speech tags for each Chinese word in both the PRF and UD2.0 formats,
+ and the utterance written in Chinese characters as well as its LSHK-format romanisation.
+
+
+ For example:
+ ```python
+ {
+     'conversation_id': 'TNR016-DR070398-HAI6V',
+     'pos_tags_prf': ['v', 'w'],
+     'pos_tags_ud': ['VERB', 'PUNCT'],
+     'speaker': 'B',
+     'transcriptions': ['hai6', 'VQ1'],
+     'turn_number': 112,
+     'tokens': ['係', '。']
+ }
+ ```
+
+ ### Data Fields
+ - conversation_id: unique dialogue-level id
+ - pos_tags_prf: token-level POS tags in the PRF format
+ - pos_tags_ud: token-level POS tags in the UD2.0 format
+ - speaker: unique speaker id within the dialogue
+ - transcriptions: token-level romanisations in the LSHK format
+ - turn_number: turn number within the dialogue
+ - tokens: Chinese words or punctuation marks, one per token
+
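+ Note that `pos_tags_prf` and `pos_tags_ud` are `ClassLabel` features, so loaded examples store them as integer ids rather than the strings shown above. A minimal sketch of decoding them back to tag names:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("hkcancor", split="train")
+
+ # Each POS field is a Sequence of ClassLabel; int2str maps ids back to names.
+ ud_labels = dataset.features["pos_tags_ud"].feature
+ example = dataset[0]
+ print([ud_labels.int2str(i) for i in example["pos_tags_ud"]])
+ ```
+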
+ ### Data Splits
+ There are no specified splits in this dataset.
+
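+ Since everything is exposed as a single `train` split, held-out data has to be carved out by the user. A minimal sketch (the 10% test fraction and the seed are arbitrary choices, not part of the dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("hkcancor", split="train")
+
+ # Create an ad-hoc 90/10 train/test split.
+ splits = dataset.train_test_split(test_size=0.1, seed=42)
+ print(splits["train"].num_rows, splits["test"].num_rows)
+ ```
+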
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+ This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/deed.ast).
+
+
+ ### Citation Information
+ This corpus was developed by [Luke and Wong, 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf).
+ ```
+ @article{luke2015hong,
+   author={Luke, Kang-Kwong and Wong, May LY},
+   title={The Hong Kong Cantonese corpus: design and uses},
+   journal={Journal of Chinese Linguistics},
+   year={2015},
+   pages={309-330},
+   month={12}
+ }
+ ```
+ The POS tagset to Universal Dependencies tagset mapping is provided by Jackson Lee, as part of the [PyCantonese](https://github.com/jacksonllee/pycantonese) library.
+ ```
+ @misc{lee2020,
+   author = {Lee, Jackson},
+   title = {PyCantonese: Cantonese Linguistics and NLP in Python},
+   year = {2020},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   howpublished = {\url{https://github.com/jacksonllee/pycantonese}},
+   commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations\nrecorded between March 1997 and August 1998. It contains recordings of\nspontaneous speech (51 texts) and radio programmes (42 texts),\nwhich involve 2 to 4 speakers, with 1 text of monologue.\n\nIn total, the corpus contains around 230,000 Chinese words.\nThe text is word-segmented, annotated with part-of-speech (POS) tags and\nromanised Cantonese pronunciation.\n\nRomanisation scheme - Linguistic Society of Hong Kong (LSHK)\nPOS scheme - Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000),\n             with extended tags for Cantonese-specific phenomena added by\n             Luke and Wong (see original paper for details).\n", "citation": "@article{luke2015hong,\n author={Luke, Kang-Kwong and Wong, May LY},\n title={The Hong Kong Cantonese corpus: design and uses},\n journal={Journal of Chinese Linguistics},\n year={2015},\n pages={309-330},\n month={12}\n}\n@misc{lee2020,\n author = {Lee, Jackson},\n title = {PyCantonese: Cantonese Linguistics and NLP in Python},\n year = {2020},\n publisher = {GitHub},\n journal = {GitHub repository},\n howpublished = {https://github.com/jacksonllee/pycantonese},\n commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98}\n}\n", "homepage": "http://compling.hss.ntu.edu.sg/hkcancor/", "license": "CC BY 4.0", "features": {"conversation_id": {"dtype": "string", "id": null, "_type": "Value"}, "speaker": {"dtype": "string", "id": null, "_type": "Value"}, "turn_number": {"dtype": "int16", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "transcriptions": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags_prf": {"feature": {"num_classes": 120, "names": ["!", "\"", "#", "'", ",", "-", ".", "...", "?", "A", "AD", "AG", "AIRWAYS0", "AN", "AND", "B", "BG", "BEAN0", "C", "CENTRE0", "CG", "D", "D1", "DG", "E", "ECHO0", "F", "G", "G1", "G2", "H", "HILL0", "I", "IG", "J", "JB", "JM", "JN", "JNS", "JNT", "JNZ", "K", "KONG", "L", "L1", "LG", "M", "MG", "MONTY0", "MOUNTAIN0", "N", "N1", "NG", "NR", "NS", "NSG", "NT", "NX", "NZ", "O", "P", "PEPPER0", "Q", "QG", "R", "RG", "S", "SOUND0", "T", "TELECOM0", "TG", "TOUCH0", "U", "UG", "U0", "V", "V1", "VD", "VG", "VK", "VN", "VU", "VUG", "W", "X", "XA", "XB", "XC", "XD", "XE", "XJ", "XJB", "XJN", "XJNT", "XJNZ", "XJV", "XJA", "XL1", "XM", "XN", "XNG", "XNR", "XNS", "XNT", "XNX", "XNZ", "XO", "XP", "XQ", "XR", "XS", "XT", "XV", "XVG", "XVN", "XX", "Y", "YG", "Y1", "Z"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "pos_tags_ud": {"feature": {"num_classes": 16, "names": ["DET", "PRON", "VERB", "NOUN", "ADJ", "PUNCT", "INTJ", "ADV", "V", "PART", "X", "NUM", "PROPN", "AUX", "CCONJ", "ADP"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hkcancor", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5746381, "num_examples": 10801, "dataset_name": "hkcancor"}}, "download_checksums": {"http://compling.hss.ntu.edu.sg/hkcancor/data/hkcancor-utf8.zip": {"num_bytes": 961514, "checksum": "09223963b8756254e15353cad843f8a4b0cbc4b9223dc8a8fa27fb1cf846057e"}}, "download_size": 961514, "post_processing_size": null, "dataset_size": 5746381, "size_in_bytes": 6707895}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac3b79bc1b7bb337ff108efb54a2d0c11cd9163eeed64680e511961e6d2df262
+ size 36493
hkcancor.py ADDED
@@ -0,0 +1,315 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Hong Kong Cantonese Corpus (HKCanCor)."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+ import xml.etree.ElementTree as ET
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{luke2015hong,
+   author={Luke, Kang-Kwong and Wong, May LY},
+   title={The Hong Kong Cantonese corpus: design and uses},
+   journal={Journal of Chinese Linguistics},
+   year={2015},
+   pages={309-330},
+   month={12}
+ }
+ @misc{lee2020,
+   author = {Lee, Jackson},
+   title = {PyCantonese: Cantonese Linguistics and NLP in Python},
+   year = {2020},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   howpublished = {https://github.com/jacksonllee/pycantonese},
+   commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations
+ recorded between March 1997 and August 1998. It contains recordings of
+ spontaneous speech (51 texts) and radio programmes (42 texts),
+ which involve 2 to 4 speakers, with 1 text of monologue.
+
+ In total, the corpus contains around 230,000 Chinese words.
+ The text is word-segmented, annotated with part-of-speech (POS) tags and
+ romanised Cantonese pronunciation.
+
+ Romanisation scheme - Linguistic Society of Hong Kong (LSHK)
+ POS scheme - Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000),
+              with extended tags for Cantonese-specific phenomena added by
+              Luke and Wong (see original paper for details).
+ """
+
+ _HOMEPAGE = "http://compling.hss.ntu.edu.sg/hkcancor/"
+
+ _LICENSE = "CC BY 4.0"
+
+ _URL = "http://compling.hss.ntu.edu.sg/hkcancor/data/hkcancor-utf8.zip"
+
+
+ class Hkcancor(datasets.GeneratorBasedBuilder):
+     """Hong Kong Cantonese Corpus (HKCanCor)."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     # The original tagset has 110+ tags and includes fine-grained annotations,
+     # e.g., distinguishing morphemes vs. non-morphemes. For practical purposes
+     # (usability, comparing across datasets), Lee 2020 mapped HKCanCor tags
+     # to the Universal Dependencies 2.0 scheme. The following is adapted from:
+     # https://github.com/jacksonllee/pycantonese/blob/master/pycantonese/pos_tagging/hkcancor_to_ud.py
+
+     pos_map = {
+         "!": "PUNCT",
+         '"': "PUNCT",
+         "#": "X",
+         "'": "PUNCT",
+         ",": "PUNCT",
+         "-": "PUNCT",
+         ".": "PUNCT",
+         "...": "PUNCT",
+         "?": "PUNCT",
+         "A": "ADJ",  # HKCanCor: Adjective
+         "AD": "ADV",  # HKCanCor: Adjective as Adverbial
+         "AG": "ADJ",  # HKCanCor: Adjective Morpheme
+         "AIRWAYS0": "PROPN",
+         "AN": "NOUN",  # HKCanCor: Adjective with Nominal Function
+         "AND": "PROPN",  # In one instance of "Chilli and Pepper"
+         "B": "ADJ",  # HKCanCor: Non-predicate Adjective
+         "BG": "ADJ",  # HKCanCor: Non-predicate Adjective Morpheme
+         "BEAN0": "PROPN",  # In one instance of "Mr Bean"
+         "C": "CCONJ",  # HKCanCor: Conjunction
+         "CENTRE0": "NOUN",  # In one instance of "career centre"
+         "CG": "CCONJ",
+         "D": "ADV",  # HKCanCor: Adverb
+         "D1": "ADV",  # Most instances are gwai2 "ghost".
+         "DG": "ADV",  # HKCanCor: Adverb Morpheme
+         "E": "INTJ",  # HKCanCor: Interjection
+         "ECHO0": "PROPN",  # In one instance of "Big Echo"
+         "F": "ADV",  # HKCanCor: Directional Locality
+         "G": "X",  # HKCanCor: Morpheme
+         "G1": "V",  # The first A in the "A-not-AB" pattern, where AB is a verb.
+         "G2": "ADJ",  # The first A in "A-not-AB", where AB is an adjective.
+         "H": "PROPN",  # HKCanCor: Prefix (aa3 阿 followed by a person name)
+         "HILL0": "PROPN",  # In "Benny Hill"
+         "I": "X",  # HKCanCor: Idiom
+         "IG": "X",
+         "J": "NOUN",  # HKCanCor: Abbreviation
+         "JB": "ADJ",
+         "JM": "NOUN",
+         "JN": "NOUN",
+         "JNS": "PROPN",
+         "JNT": "PROPN",
+         "JNZ": "PROPN",
+         "K": "X",  # HKCanCor: Suffix (sing3 性 for nouns; dei6 地 for adverbs)
+         "KONG": "PROPN",  # In "Hong Kong"
+         "L": "X",  # Fixed Expression
+         "L1": "X",
+         "LG": "X",
+         "M": "NUM",  # HKCanCor: Numeral
+         "MG": "X",
+         "MONTY0": "PROPN",  # In "Full Monty"
+         "MOUNTAIN0": "PROPN",  # In "Blue Mountain"
+         "N": "NOUN",  # Common Noun
+         "N1": "DET",  # HKCanCor: only used for ne1 呢; determiner
+         "NG": "NOUN",
+         "NR": "PROPN",  # HKCanCor: Personal Name
+         "NS": "PROPN",  # HKCanCor: Place Name
+         "NSG": "PROPN",
+         "NT": "PROPN",  # HKCanCor: Organization Name
+         "NX": "NOUN",  # HKCanCor: Nominal Character String
+         "NZ": "PROPN",  # HKCanCor: Other Proper Noun
+         "O": "X",  # HKCanCor: Onomatopoeia
+         "P": "ADP",  # HKCanCor: Preposition
+         "PEPPER0": "PROPN",  # In "Chilli and Pepper"
+         "Q": "NOUN",  # HKCanCor: Classifier
+         "QG": "NOUN",  # HKCanCor: Classifier Morpheme
+         "R": "PRON",  # HKCanCor: Pronoun
+         "RG": "PRON",  # HKCanCor: Pronoun Morpheme
+         "S": "NOUN",  # HKCanCor: Space Word
+         "SOUND0": "PROPN",  # In "Manchester's Sound"
+         "T": "ADV",  # HKCanCor: Time Word
+         "TELECOM0": "PROPN",  # In "Hong Kong Telecom"
+         "TG": "ADV",  # HKCanCor: Time Word Morpheme
+         "TOUCH0": "PROPN",  # In "Don't Touch" (a magazine)
+         "U": "PART",  # HKCanCor: Auxiliary (e.g., ge3 嘅 after an attributive adj)
+         "UG": "PART",  # HKCanCor: Auxiliary Morpheme
+         "U0": "PROPN",  # U as in "Hong Kong U" (= The University of Hong Kong)
+         "V": "VERB",  # HKCanCor: Verb
+         "V1": "VERB",
+         "VD": "ADV",  # HKCanCor: Verb as Adverbial
+         "VG": "VERB",
+         "VK": "VERB",
+         "VN": "NOUN",  # HKCanCor: Verb with Nominal Function
+         "VU": "AUX",
+         "VUG": "AUX",
+         "W": "PUNCT",  # HKCanCor: Punctuation
+         "X": "X",  # HKCanCor: Unclassified Item
+         "XA": "ADJ",
+         "XB": "ADJ",
+         "XC": "CCONJ",
+         "XD": "ADV",
+         "XE": "INTJ",
+         "XJ": "X",
+         "XJB": "PROPN",
+         "XJN": "NOUN",
+         "XJNT": "PROPN",
+         "XJNZ": "PROPN",
+         "XJV": "VERB",
+         "XJA": "X",
+         "XL1": "INTJ",
+         "XM": "NUM",
+         "XN": "NOUN",
+         "XNG": "NOUN",
+         "XNR": "PROPN",
+         "XNS": "PROPN",
+         "XNT": "PROPN",
+         "XNX": "NOUN",
+         "XNZ": "PROPN",
+         "XO": "X",
+         "XP": "ADP",
+         "XQ": "NOUN",
+         "XR": "PRON",
+         "XS": "PROPN",
+         "XT": "NOUN",
+         "XV": "VERB",
+         "XVG": "VERB",
+         "XVN": "NOUN",
+         "XX": "X",
+         "Y": "PART",  # HKCanCor: Modal Particle
+         "YG": "PART",  # HKCanCor: Modal Particle Morpheme
+         "Y1": "PART",
+         "Z": "ADJ",  # HKCanCor: Descriptive
+     }
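+     # Note: PRF tags missing from this map fall back to "X" ("unclassified")
+     # via the pos_map.get(prf_tag, "X") lookup in _generate_examples below.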
+
+     def _info(self):
+
+         pos_tags_prf = datasets.Sequence(datasets.features.ClassLabel(names=[tag for tag in self.pos_map.keys()]))
+
+         pos_tags_ud = datasets.Sequence(
+             datasets.features.ClassLabel(names=[tag for tag in set(self.pos_map.values())])
+         )
+
+         features = datasets.Features(
+             {
+                 "conversation_id": datasets.Value("string"),
+                 "speaker": datasets.Value("string"),
+                 "turn_number": datasets.Value("int16"),
+                 "tokens": datasets.Sequence(datasets.Value("string")),
+                 "transcriptions": datasets.Sequence(datasets.Value("string")),
+                 "pos_tags_prf": pos_tags_prf,
+                 "pos_tags_ud": pos_tags_ud,
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = os.path.join(dl_manager.download_and_extract(_URL), "utf8")
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "data_dir": data_dir,
+                     "split": "train",
+                 },
+             )
+         ]
+
+     def _generate_examples(self, data_dir, split):
+         """Yields examples."""
+
+         downloaded_files = [os.path.join(data_dir, fn) for fn in sorted(os.listdir(data_dir))]
+         for filepath in downloaded_files:
+             # Each file in the corpus contains one conversation
+             with open(filepath, encoding="utf-8") as f:
+                 xml = f.read()
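+             # The raw corpus files have no single top-level root element
+             # (the <info> metadata and <sent> elements sit side by side),
+             # so the text is not well-formed XML on its own.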
+             # Add dummy root node to form a valid tree
+             xml = "<root>" + xml + "</root>"
+             tree = ET.fromstring(xml)
+
+             # Extract dialogue metadata
+             info = [line.strip() for line in tree.find("info").text.split("\n") if line and not line.endswith("END")]
+             tape_number = "".join(info[0].split("-")[1:])
+             date_recorded = "".join(info[1].split("-")[1:])
+
+             turn_number = -1
+             for sent in tree.findall("sent"):
+                 for child in sent.iter():
+                     if child.tag == "sent_head":
+                         current_speaker = child.text.strip()[:-1]
+                         turn_number += 1
+                     elif child.tag == "sent_tag":
+                         tokens = []
+                         pos_prf = []
+                         pos_ud = []
+                         transcriptions = []
+                         current_sentence = [w.strip() for w in child.text.split("\n") if w and not w.isspace()]
+                         for w in current_sentence:
+                             token_data = w.split("/")
+                             tokens.append(token_data[0])
+                             transcriptions.append(token_data[2])
+
+                             prf_tag = token_data[1].upper()
+                             ud_tag = self.pos_map.get(prf_tag, "X")
+                             pos_prf.append(prf_tag)
+                             pos_ud.append(ud_tag)
+
+                         num_tokens = len(tokens)
+                         num_pos_tags = len(pos_prf)
+                         num_transcriptions = len(transcriptions)
+
+                         assert len(tokens) == len(
+                             pos_prf
+                         ), "Sizes do not match: {nw} vs {np} for tokens vs pos-tags in {fp}".format(
+                             nw=num_tokens, np=num_pos_tags, fp=filepath
+                         )
+                         assert len(pos_prf) == len(
+                             transcriptions
+                         ), "Sizes do not match: {np} vs {nt} for pos-tags vs transcriptions in {fp}".format(
+                             np=num_pos_tags, nt=num_transcriptions, fp=filepath
+                         )
+
+                         # Corpus doesn't come with conversation-level ids, and
+                         # multiple texts can correspond to the same tape number,
+                         # date, and speakers.
+                         # As a workaround, combine the tape number and recording
+                         # date with the first few transcriptions in the
+                         # conversation to create an identifier.
+                         id_from_transcriptions = "".join(transcriptions[:5])[:5].upper()
+                         id_ = "{tn}-{rd}-{it}".format(tn=tape_number, rd=date_recorded, it=id_from_transcriptions)
+                         yield id_, {
+                             "conversation_id": id_,
+                             "speaker": current_speaker,
+                             "turn_number": turn_number,
+                             "tokens": tokens,
+                             "transcriptions": transcriptions,
+                             "pos_tags_prf": pos_prf,
+                             "pos_tags_ud": pos_ud,
+                         }