annotations_creators:
  - expert-generated
language_creators:
  - found
language:
  - yue
license:
  - cc-by-4.0
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - translation
  - text-generation
  - fill-mask
task_ids:
  - dialogue-modeling
paperswithcode_id: hong-kong-cantonese-corpus
pretty_name: The Hong Kong Cantonese Corpus (HKCanCor)
dataset_info:
  features:
    - name: conversation_id
      dtype: string
    - name: speaker
      dtype: string
    - name: turn_number
      dtype: int16
    - name: tokens
      sequence: string
    - name: transcriptions
      sequence: string
    - name: pos_tags_prf
      sequence:
        class_label:
          names:
            '0': '!'
            '1': '"'
            '2': '#'
            '3': ''''
            '4': ','
            '5': '-'
            '6': .
            '7': ...
            '8': '?'
            '9': A
            '10': AD
            '11': AG
            '12': AIRWAYS0
            '13': AN
            '14': AND
            '15': B
            '16': BG
            '17': BEAN0
            '18': C
            '19': CENTRE0
            '20': CG
            '21': D
            '22': D1
            '23': DG
            '24': E
            '25': ECHO0
            '26': F
            '27': G
            '28': G1
            '29': G2
            '30': H
            '31': HILL0
            '32': I
            '33': IG
            '34': J
            '35': JB
            '36': JM
            '37': JN
            '38': JNS
            '39': JNT
            '40': JNZ
            '41': K
            '42': KONG
            '43': L
            '44': L1
            '45': LG
            '46': M
            '47': MG
            '48': MONTY0
            '49': MOUNTAIN0
            '50': 'N'
            '51': N1
            '52': NG
            '53': NR
            '54': NS
            '55': NSG
            '56': NT
            '57': NX
            '58': NZ
            '59': O
            '60': P
            '61': PEPPER0
            '62': Q
            '63': QG
            '64': R
            '65': RG
            '66': S
            '67': SOUND0
            '68': T
            '69': TELECOM0
            '70': TG
            '71': TOUCH0
            '72': U
            '73': UG
            '74': U0
            '75': V
            '76': V1
            '77': VD
            '78': VG
            '79': VK
            '80': VN
            '81': VU
            '82': VUG
            '83': W
            '84': X
            '85': XA
            '86': XB
            '87': XC
            '88': XD
            '89': XE
            '90': XJ
            '91': XJB
            '92': XJN
            '93': XJNT
            '94': XJNZ
            '95': XJV
            '96': XJA
            '97': XL1
            '98': XM
            '99': XN
            '100': XNG
            '101': XNR
            '102': XNS
            '103': XNT
            '104': XNX
            '105': XNZ
            '106': XO
            '107': XP
            '108': XQ
            '109': XR
            '110': XS
            '111': XT
            '112': XV
            '113': XVG
            '114': XVN
            '115': XX
            '116': 'Y'
            '117': YG
            '118': Y1
            '119': Z
    - name: pos_tags_ud
      sequence:
        class_label:
          names:
            '0': DET
            '1': PRON
            '2': VERB
            '3': NOUN
            '4': ADJ
            '5': PUNCT
            '6': INTJ
            '7': ADV
            '8': V
            '9': PART
            '10': X
            '11': NUM
            '12': PROPN
            '13': AUX
            '14': CCONJ
            '15': ADP
  splits:
    - name: train
      num_bytes: 5746381
      num_examples: 10801
  download_size: 961514
  dataset_size: 5746381

Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)

Table of Contents

Dataset Description

Dataset Summary

The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded between March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts) and radio programmes (42 texts), which involve 2 to 4 speakers, along with 1 monologue text.

In total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word-level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation.

  • Romanisation
    • Follows conventions set by the Linguistic Society of Hong Kong (LSHK).
  • POS
    • The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena.
    • To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the Universal Dependencies 2.0 format. This mapping references the PyCantonese library.
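The PRF-to-UD mapping can be pictured as a simple tag lookup. The sketch below is a toy illustration, not the actual PyCantonese mapping table; only the 'v' → VERB and 'w' → PUNCT pairs are attested in the example instance later in this card, while the remaining pairs are assumptions:

```python
# Toy sketch of a PRF -> UD 2.0 tag lookup. Only 'v' and 'w' are taken
# from this card's example instance; the other pairs are illustrative
# assumptions, not the full PyCantonese mapping table.
PRF_TO_UD = {
    "v": "VERB",   # verb
    "n": "NOUN",   # common noun
    "r": "PRON",   # pronoun
    "d": "ADV",    # adverb
    "w": "PUNCT",  # punctuation
}

def map_tags(prf_tags):
    """Map a sequence of PRF tags to UD tags, falling back to 'X'
    (the UD tag for 'other') when a tag is not in the lookup."""
    return [PRF_TO_UD.get(tag, "X") for tag in prf_tags]

print(map_tags(["v", "w"]))  # ['VERB', 'PUNCT']
```

In practice the full mapping covers the entire 120-tag PRF-derived tagset shown in the metadata above; the fallback to 'X' mirrors how unmappable tags are typically handled.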

Supported Tasks and Leaderboards

[More Information Needed]

Languages

Yue Chinese / Cantonese (Hong Kong).

Dataset Structure

This corpus has 10,801 utterances and approximately 230,000 Chinese words. There is no predefined split.

Data Instances

Each instance contains a conversation id, speaker id within that conversation, turn number, part-of-speech tag for each Chinese word in the PRF format and UD2.0 format, and the utterance written in Chinese characters as well as its LSHK format romanisation.

For example:

{
    'conversation_id': 'TNR016-DR070398-HAI6V',
    'pos_tags_prf': ['v', 'w'],
    'pos_tags_ud': ['VERB', 'PUNCT'],
    'speaker': 'B',
    'transcriptions': ['hai6', 'VQ1'],
    'turn_number': 112,
    'tokens': ['係', '。']
}

Data Fields

  • conversation_id: unique dialogue-level id
  • pos_tags_prf: POS tag using the PRF format at token-level
  • pos_tags_ud: POS tag using the UD2.0 format at token-level
  • speaker: unique speaker id within dialogue
  • transcriptions: token-level romanisation in the LSHK format
  • turn_number: turn number in dialogue
  • tokens: Chinese word or punctuation at token-level
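Because tokens and transcriptions are parallel token-level sequences of equal length, an utterance can be paired with its LSHK romanisation by zipping the two fields. A minimal sketch, using the example instance above:

```python
# Pair each Chinese token with its LSHK romanisation. Both fields are
# parallel sequences, so position i of one corresponds to position i
# of the other.
def align(example):
    return list(zip(example["tokens"], example["transcriptions"]))

example = {
    "tokens": ["係", "。"],
    "transcriptions": ["hai6", "VQ1"],
}
print(align(example))  # [('係', 'hai6'), ('。', 'VQ1')]
```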

Data Splits

There are no specified splits in this dataset.
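Users who need a held-out set must therefore create their own split. To avoid leaking utterances from the same conversation across splits, one common approach is to assign whole conversations by hashing their conversation_id. A minimal sketch (the 10% test fraction and the MD5 hashing scheme are arbitrary choices of this example, not part of the dataset):

```python
import hashlib

def assign_split(conversation_id, test_fraction=0.1):
    """Deterministically assign a conversation to 'train' or 'test' by
    hashing its id, so that every utterance from one conversation lands
    in the same split across runs."""
    digest = hashlib.md5(conversation_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return "test" if bucket < test_fraction * 100 else "train"

print(assign_split("TNR016-DR070398-HAI6V"))
```

Hashing the id (rather than shuffling rows) keeps the assignment stable even if the dataset is re-downloaded or reordered.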

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

This work is licensed under a Creative Commons Attribution 4.0 International License.

Citation Information

This corpus was developed by Luke and Wong (2015).

@article{luke2015hong,
  author={Luke, Kang-Kwong and Wong, May LY},
  title={The Hong Kong Cantonese corpus: design and uses},
  journal={Journal of Chinese Linguistics},
  year={2015},
  pages={309-330},
  month={12}
}

The POS tagset to Universal Dependencies tagset mapping is provided by Jackson Lee as part of the PyCantonese library.

@misc{lee2020,
  author = {Lee, Jackson},
  title = {PyCantonese: Cantonese Linguistics and NLP in Python},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/jacksonllee/pycantonese}},
  commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98}
}

Contributions

Thanks to @j-chim for adding this dataset.