
HHD-Ethiopic Dataset

This dataset, named "HHD-Ethiopic," is designed for Ethiopic text-image recognition tasks. It contains a collection of historical handwritten manuscripts in the Ethiopic script. The dataset is intended to facilitate research and development in Ethiopic text-image recognition.
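
A minimal loading sketch using the Hugging Face datasets library; the repo id is taken from the citation below, while the ability of load_dataset to resolve the hosted files automatically is an assumption (the raw CSV and numpy files described under Dataset Details are an alternative entry point):

from datasets import load_dataset

# Repo id from the dataset URL in the citation; automatic file
# resolution and the resulting split names are assumptions.
ds = load_dataset("OCR-Ethiopic/HHD-Ethiopic")
print(ds)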

Dataset Details

  • Size: 79,684 images in total

  • Training Set: 57,374 images

  • Test Sets: HHD-Ethiopic includes two separate test sets:

    • Test Set I (IID): 6,375 images (randomly drawn from the training set)
    • Test Set II (OOD): 15,935 images (specifically from manuscripts dated in the 18th century)
  • Validation Set: 10% of the training set, randomly drawn

  • Number of unique Ethiopic characters: 306

  • Dataset Formats: the HHD-Ethiopic dataset is stored in two different formats to accommodate different use cases (a loading sketch follows this list):

    • Raw Images and Ground-truth Text: consists of the original images and their corresponding ground-truth text. The dataset is structured as raw images (.png) accompanied by a train CSV file, a test-I CSV file, and a test-II CSV file that map the file names of the images to their respective ground-truth text for the training set and the two test sets.
    • Numpy Format: in this format, both the images and the ground-truth text are stored in a convenient numpy format. The dataset provides pre-processed numpy arrays that can be used directly for training and testing models.
  • Metadata (Human-Level Performance): we have also included metadata on the human-level performance predicted by individuals for the test sets. This metadata provides insight into the performance level that humans achieve on historical Ethiopic text-image recognition tasks (a scoring sketch follows this list):

    • Test Set I - a group of 9 individuals was presented with a random subset of the dataset and asked to transcribe the handwritten texts to the best of their ability. The results were collected and stored in the CSV file Test-I-human_performance, included in the dataset.
    • Test Set II - this test set was prepared exclusively from Ethiopic historical handwritten documents dated to the 18th century; a different group of 4 individuals evaluated this subset. Their human-level performance predictions are stored in a separate CSV file, Test-II_human_performance. Each CSV file contains the necessary metadata, including the image file names, the ground-truth text, and the corresponding human-generated transcriptions; please refer to these CSV files to explore or analyze the human-level performance data further.
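
A minimal loading sketch for both storage formats; the file names (train.csv, an images/ directory, train_images.npy, train_labels.npy) and the CSV column names (filename, text) are assumptions made for illustration, so check the repository layout for the actual names:

import numpy as np
import pandas as pd
from PIL import Image

# Raw-image format: a CSV maps image file names to ground-truth text.
# File and column names here are assumptions, not the published layout.
train_df = pd.read_csv("train.csv")
sample = train_df.iloc[0]
image = Image.open("images/" + sample["filename"])
label = sample["text"]

# Numpy format: pre-processed arrays ready for direct use in training.
# Array file names are likewise assumptions.
train_images = np.load("train_images.npy")
train_labels = np.load("train_labels.npy", allow_pickle=True)
print(train_images.shape, len(train_labels), label)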

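A sketch for scoring transcriptions against the human-performance CSVs using character error rate (CER), computed here with a plain Levenshtein edit distance; the file name and the column names ground_truth and human_transcription are assumptions, so check the actual CSV headers:

import pandas as pd

def levenshtein(a, b):
    # Edit distance between two strings (insertions, deletions, substitutions).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(references, hypotheses):
    # Total edit distance divided by total number of reference characters.
    edits = sum(levenshtein(r, h) for r, h in zip(references, hypotheses))
    return edits / sum(len(r) for r in references)

df = pd.read_csv("Test-I-human_performance.csv")  # assumed file name
print("human CER:", cer(df["ground_truth"], df["human_transcription"]))
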
Citation

If you use the HHD-Ethiopic dataset in your research, please consider citing it:

@misc{author_2023,
  author    = { Birhanu Hailu Belay, Isabelle Guyon, Tadele Mengiste, Bezawork Tilahun, Marcus Liwicki, Tesfa Tegegne, and Romain Egele },
  title     = { HHD-Ethiopic: A Historical Handwritten Dataset for Ethiopic OCR with Baseline Models and Human-level Performance (Revision 50c1e04) },
  year      = 2023,
  url       = { https://huggingface.co/datasets/OCR-Ethiopic/HHD-Ethiopic },
  doi       = { 10.57967/hf/0691 },
  publisher = { Hugging Face }
}

License

This work is licensed under a Creative Commons Attribution 4.0 International License.
