
The first sentiment analysis dataset (with two classes) for the Tigrinya language. The train set was constructed automatically, while the test set was labeled manually.

  1. Negative sentiment is labeled as 0.
  2. Positive sentiment is labeled as 1.

Train size: 49,374 (25,031 negative and 24,343 positive)
Test size: 4,000 (2,000 negative and 2,000 positive)
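
For orientation, the snippet below is a minimal sketch of loading the splits and inspecting the label scheme with the Hugging Face `datasets` library. The repository id shown is a placeholder assumption, not the dataset's actual Hub name; substitute the real id when using it.

```python
# Minimal sketch: load the dataset and inspect the binary labels.
# NOTE: "username/tigrinya-sentiment" is a hypothetical repository id.
from datasets import load_dataset

dataset = load_dataset("username/tigrinya-sentiment")  # placeholder repo id

# Expected splits: train (49,374 rows) and test (4,000 rows).
print({split: len(ds) for split, ds in dataset.items()})

# Labels follow the scheme above: 0 = negative, 1 = positive.
print(dataset["train"][0])
```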

For more information on our experiments and results, please check our paper:

Title: Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya
Link: https://arxiv.org/pdf/2006.07698.pdf
Authors: Abrhalei Frezghi Tela, Abraham Woubie, Ville Hautamaki

Please consider citing the paper.

@misc{tela2020transferring,
  title={Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya},
  author={Abrhalei Tela and Abraham Woubie and Ville Hautamaki},
  year={2020},
  eprint={2006.07698},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
