Dataset Viewer issue

#1
by 0x22almostEvil - opened

The dataset viewer is not working.

en ru "You spend much of season two wondering, "why isn't she turning?"" "Это был вариант "а". Почти весь второй сезон ты думаешь:"

Even though row 4 contains the quoted "а", the file is a TSV, so that shouldn't be a problem.
Error details:

Error code:   FeaturesError
Exception:    ParserError
Message:      Error tokenizing data. C error: Expected 4 fields in line 2380, saw 5

Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows_from_streaming.py", line 176, in compute_first_rows_response
                  iterable_dataset = iterable_dataset._resolve_features()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2206, in _resolve_features
                  features = _infer_features_from_batch(self.with_format(None)._head())
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1230, in _head
                  return _examples_to_batch(list(self.take(n)))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1379, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1039, in __iter__
                  yield from islice(self.ex_iterable, self.n)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 281, in __iter__
                  for key, pa_table in self.generate_tables_fn(**self.kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/csv/csv.py", line 187, in _generate_tables
                  for batch_idx, df in enumerate(csv_file_reader):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1624, in __next__
                  return self.get_chunk()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1733, in get_chunk
                  return self.read(nrows=size)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1704, in read
                  ) = self._engine.read(  # type: ignore[attr-defined]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
                  chunks = self._reader.read_low_memory(nrows)
                File "pandas/_libs/parsers.pyx", line 826, in pandas._libs.parsers.TextReader.read_low_memory
                File "pandas/_libs/parsers.pyx", line 875, in pandas._libs.parsers.TextReader._read_rows
                File "pandas/_libs/parsers.pyx", line 850, in pandas._libs.parsers.TextReader._tokenize_rows
                File "pandas/_libs/parsers.pyx", line 861, in pandas._libs.parsers.TextReader._check_tokenize_status
                File "pandas/_libs/parsers.pyx", line 2029, in pandas._libs.parsers.raise_parser_error
              pandas.errors.ParserError: Error tokenizing data. C error: Expected 4 fields in line 2380, saw 5

cc @albertvillanova @lhoestq @severo.

We use pandas with a tab separator to read TSV files; maybe we need to pass an additional parameter to pd.read_csv to handle the quotes correctly?

If so, you'll be able to pass this parameter in the dataset's YAML configuration to enable the viewer.
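For reference, a minimal sketch of what such a parameter could look like, assuming quote handling can simply be disabled (the file name is hypothetical, and this is not necessarily the viewer's exact call):

import csv

import pandas as pd

# With the default quoting, an unescaped inner quote lets a later tab inside
# the text be read as a field separator. QUOTE_NONE tells pandas to treat
# quotes as ordinary characters and split only on tabs, so each row keeps its
# expected number of fields (the surrounding quotes then stay in the text).
df = pd.read_csv(
    "data.tsv",             # hypothetical file name
    sep="\t",
    quoting=csv.QUOTE_NONE,
)
print(df.shape)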

Hi @0x22almostEvil, note that your text columns are enclosed in quotes, and these are removed by default by any CSV parser:

  • Your CSV file cell text: "Some text"
  • Is read by the CSV parser as the string: Some text

These quotes are important: a delimiter character (the tab) appearing between the quotes is treated as part of the text column rather than splitting it into two columns. However, if your text contains additional quotes besides the leading and trailing ones, and also tabs, then a tab after such a quote is treated as a separator (not as part of the text), creating an extra column.
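A rough illustration of this with Python's csv module (the viewer actually goes through pandas' C parser, but the quoting rules are similar; the strings below are made up):

import csv
import io

good = 'en\tru\t"text with a \t tab inside"'
bad = 'en\tru\t"broken "inner quote" with a \t tab"'

# The tab inside the properly quoted field stays part of the field ...
print(len(next(csv.reader(io.StringIO(good), delimiter="\t"))))  # 3 fields
# ... while the unescaped inner quote ends the quoted part early, so the later
# tab is read as a real separator and an extra field appears.
print(len(next(csv.reader(io.StringIO(bad), delimiter="\t"))))   # 4 fields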

To avoid this issue, you should create your TSV file with a proper CSV writer, which will correctly escape the quotes contained within your text columns (normally by doubling them):

en\tru\t"You spend much of season two wondering, ""why isn't she turning?"""\t"Это был вариант ""а"". Почти весь второй сезон ты думаешь:"\n
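Such a line can be produced with Python's csv module rather than by joining strings by hand, for example (the output file name is hypothetical); the writer doubles any quote that appears inside a field, which is exactly the escaping shown above:

import csv

row = [
    "en",
    "ru",
    'You spend much of season two wondering, "why isn\'t she turning?"',
    'Это был вариант "а". Почти весь второй сезон ты думаешь:',
]
# delimiter="\t" makes it a TSV; the default quoting doubles embedded quotes.
with open("data.tsv", "w", newline="", encoding="utf-8") as f:  # hypothetical file name
    csv.writer(f, delimiter="\t").writerow(row)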

Additionally, to reduce the time other users spend loading your dataset, please consider compressing your data file (with ZIP, for example) before uploading it to the Hub.
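For instance, a one-off compression step could look like this (file names are hypothetical); compressed data files can generally be read by the loaders on the Hub without being unpacked first:

import zipfile

# Compress the TSV before uploading; ZIP_DEFLATED enables actual compression
# (the default ZIP_STORED only archives without compressing).
with zipfile.ZipFile("data.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("data.tsv")  # hypothetical file names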
