Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)

Dataset Summary

The Hong Kong Cantonese Corpus (HKCanCor) comprises transcribed conversations recorded between March 1997 and August 1998. It contains recordings of spontaneous speech (51 texts) and radio programmes (42 texts), which involve 2 to 4 speakers, along with 1 monologue text.

In total, the corpus contains around 230,000 Chinese words. The text is word-segmented (i.e., tokenization is at word-level, and each token can span multiple Chinese characters). Tokens are annotated with part-of-speech (POS) tags and romanised Cantonese pronunciation.

  • Romanisation
    • Follows conventions set by the Linguistic Society of Hong Kong (LSHK).
  • POS
    • The tagset used by this corpus extends the one in the Peita-Fujitsu-Renmin Ribao (PRF) corpus (Duan et al., 2000). Extensions were made to further capture Cantonese-specific phenomena.
    • To facilitate everyday usage and for better comparability across languages and/or corpora, this dataset also includes the tags mapped to the Universal Dependencies 2.0 format. This mapping references the PyCantonese library (a usage sketch follows this list).
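
The sketch below illustrates that mapping with PyCantonese. It assumes PyCantonese 3.x, where the HKCanCor-to-UD tag mapping is exposed as pycantonese.pos_tagging.hkcancor_to_ud; the exact import path, call signature, and tag-case handling should be verified against the installed version.

# A minimal sketch of the HKCanCor -> UD 2.0 tag mapping via PyCantonese.
# Assumption: pycantonese.pos_tagging.hkcancor_to_ud() called with no
# argument returns the full mapping dict (check your installed version).
from pycantonese.pos_tagging import hkcancor_to_ud

mapping = hkcancor_to_ud()
for tag in ("v", "w"):
    # Look up the tag, tolerating either lower- or upper-case keys;
    # e.g. a verb tag is expected to map to "VERB" and punctuation to "PUNCT".
    print(tag, "->", mapping.get(tag) or mapping.get(tag.upper()))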

Supported Tasks and Leaderboards

[More Information Needed]

Languages

Yue Chinese / Cantonese (Hong Kong).

Dataset Structure

This corpus has 10,801 utterances and approximately 230,000 Chinese words. There is no predefined split.

Data Instances

Each instance contains a conversation id, a speaker id within that conversation, a turn number, part-of-speech tags for each Chinese word in both the PRF and UD 2.0 formats, and the utterance written in Chinese characters as well as its LSHK-format romanisation.

For example:

{
    'conversation_id': 'TNR016-DR070398-HAI6V',
    'pos_tags_prf': ['v', 'w'],
    'pos_tags_ud': ['VERB', 'PUNCT'],
    'speaker': 'B',
    'transcriptions': ['hai6', 'VQ1'],
    'turn_number': 112,
    'tokens': ['係', '。']
}
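
A minimal loading sketch is shown below. It assumes the corpus is available through the Hugging Face datasets library under the id hkcancor and, since there is no predefined split, that everything is exposed as a single train split; the underlying archive is downloaded from the corpus website, so the call can fail if that host is unreachable.

# A minimal sketch, assuming the dataset id "hkcancor" and a single "train"
# split. Recent versions of the datasets library may additionally require
# trust_remote_code=True for script-based datasets such as this one.
from datasets import load_dataset

dataset = load_dataset("hkcancor", split="train")

print(len(dataset))   # expected to be on the order of 10,801 utterances
print(dataset[0])     # an instance shaped like the example above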

Data Fields

  • conversation_id: unique dialogue-level id
  • pos_tags_prf: POS tag using the PRF format at token-level
  • pos_tags_ud: POS tag using the UD 2.0 format at token-level
  • speaker: unique speaker id within dialogue
  • transcriptions: token-level romanisation in the LSHK format
  • turn_number: turn number in dialogue
  • tokens: Chinese word or punctuation at token-level
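
Because tokens, transcriptions, pos_tags_prf, and pos_tags_ud are parallel lists of equal length, they can be zipped into per-token records. The sketch below assumes a dataset object loaded as in the previous example.

# Align the parallel token-level fields of one utterance into per-token rows.
# Assumes `dataset` was loaded as in the previous sketch.
example = dataset[0]
for token, jyutping, prf_tag, ud_tag in zip(
    example["tokens"],
    example["transcriptions"],
    example["pos_tags_prf"],
    example["pos_tags_ud"],
):
    print(f"{token}\t{jyutping}\t{prf_tag}\t{ud_tag}")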

Data Splits

There are no specified splits in this dataset.

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

This work is licensed under a Creative Commons Attribution 4.0 International License.

Citation Information

This corpus was developed by Luke and Wong (2015).

@article{luke2015hong,
  author={Luke, Kang-Kwong and Wong, May LY},
  title={The Hong Kong Cantonese corpus: design and uses},
  journal={Journal of Chinese Linguistics},
  year={2015},
  pages={309-330},
  month={12}
}

The POS tagset to Universal Dependencies tagset mapping is provided by Jackson Lee as part of the PyCantonese library.

@misc{lee2020,
  author = {Lee, Jackson},
  title = {PyCantonese: Cantonese Linguistics and NLP in Python},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/jacksonllee/pycantonese}},
  commit = {1d58f44e1cb097faa69de6b617e1d28903b84b98}
}

Contributions

Thanks to @j-chim for adding this dataset.