Dataset Card for Siswati NER Corpus

Dataset Summary

The Siswati NER Corpus is a Siswati dataset developed by the Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and was crawled from gov.za websites. It was created to support the named entity recognition (NER) task for the Siswati language. The dataset uses the CoNLL shared task annotation standards.

Supported Tasks and Leaderboards

[More Information Needed]

Languages

The language supported is Siswati.

Dataset Structure

Data Instances

A data point consists of a sentence's tab-separated tokens and tags; sentences are separated by an empty line.

{'id': '0',
 'ner_tags': [0, 0, 0, 0, 0],
 'tokens': ['Tinsita', 'tebantfu', ':', 'tinsita', 'tetakhamiti']
}
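
As an illustration, raw data in the format described above (tab-separated token/tag lines, blank lines between sentences) could be parsed into this instance structure with a minimal sketch like the following; the exact file layout is an assumption based on the description in this card:

```python
# Minimal sketch: parse CoNLL-style lines into the instance format above.
# Assumes tab-separated "token<TAB>tag" lines, with blank lines between sentences.

def parse_conll(lines):
    sentences = []
    tokens, tags = [], []
    for line in lines:
        line = line.strip()
        if not line:  # a blank line ends the current sentence
            if tokens:
                sentences.append({"tokens": tokens, "ner_tags": tags})
                tokens, tags = [], []
        else:
            token, tag = line.split("\t")
            tokens.append(token)
            tags.append(tag)
    if tokens:  # flush a trailing sentence with no final blank line
        sentences.append({"tokens": tokens, "ner_tags": tags})
    return sentences

raw = [
    "Tinsita\tOUT",
    "tebantfu\tOUT",
    ":\tOUT",
    "tinsita\tOUT",
    "tetakhamiti\tOUT",
    "",
]
print(parse_conll(raw))
```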

Data Fields

  • id: id of the sample
  • tokens: the tokens of the example text
  • ner_tags: the NER tags of each token

The NER tags correspond to this list:

"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",

The NER tags have the same format as in the CoNLL shared task: a B denotes the first token of a phrase and an I any non-initial token. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC), and miscellaneous names (MISC). OUT is used for tokens not considered part of any named entity.
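
Since the `ner_tags` field stores integer indices into the tag list above, a small sketch like the following converts them back to label strings (the label order is taken directly from this card):

```python
# Map integer ner_tags back to their string labels.
# The label order is taken from the tag list in this card.
NER_LABELS = ["OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_tags(tag_ids):
    return [NER_LABELS[i] for i in tag_ids]

# The example instance above has every token outside any entity:
print(decode_tags([0, 0, 0, 0, 0]))  # ['OUT', 'OUT', 'OUT', 'OUT', 'OUT']
```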

Data Splits

The data was not split; all examples are provided as a single set.

Dataset Creation

Curation Rationale

The data was created to help introduce resources for a new language, Siswati.

[More Information Needed]

Source Data

Initial Data Collection and Normalization

The data is based on the South African government domain and was crawled from gov.za websites.

Who are the source language producers?

The data was produced by writers of South African government websites (gov.za).

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

The data was annotated during the NCHLT text resource development project.

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).


Licensing Information

The data is licensed under the Creative Commons Attribution 2.5 South Africa License.

Citation Information

@inproceedings{siswati_ner_corpus,
  author    = {B.B. Malangwane and
               M.N. Kekana and
               S.S. Sedibe and
               B.C. Ndhlovu and
               Roald Eiselen},
  title     = {NCHLT Siswati Named Entity Annotated Corpus},
  booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
  year      = {2016},
  url       = {https://repo.sadilar.org/handle/20.500.12185/346},
}

Contributions

Thanks to @yvonnegitau for adding this dataset.
