ComfyOpenSubtitles

Dataset Description

ComfyOpenSubtitles is a multilingual dataset of parallel subtitle translations. Each record pairs a subtitle line in a source language with its translation in a target language, together with the two language codes.

Languages

The dataset supports the following languages:

  • English (en)
  • Russian (ru)
  • French (fr)
  • Spanish (es)
  • Arabic (ar)
  • Simplified Chinese (zh-cn)
  • Korean (ko)
  • Japanese (ja)
  • German (de)
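A minimal sketch of how the language codes above could be used to validate a translation pair before processing. The set literal and the helper name `is_supported_pair` are illustrative, not part of the dataset itself:

```python
# Supported language codes as listed on this card
SUPPORTED_LANGUAGES = {"en", "ru", "fr", "es", "ar", "zh-cn", "ko", "ja", "de"}

def is_supported_pair(input_lang: str, target_lang: str) -> bool:
    """Check that both sides of a pair use codes from this card
    and that the pair is an actual translation (not identity)."""
    return (
        input_lang in SUPPORTED_LANGUAGES
        and target_lang in SUPPORTED_LANGUAGES
        and input_lang != target_lang
    )
```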

Dataset Structure

Data Instances

Here are some examples of data instances:

  • Input Language: English Target Language: Russian Input Text: "Oh, bud... what have you done?" Output Text: "Эх, Кореш... Что ж вы наделали?"

  • Input Language: English Target Language: French Input Text: "This is a beautiful sunset." Output Text: "C'est un magnifique coucher de soleil."

Data Fields

The dataset includes the following fields for each instance:

  • input_language: The language of the input text.
  • target_language: The language of the target translation.
  • input_text: The input text in the source language.
  • output_text: The corresponding translation in the target language.
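The four fields above can be modeled as a simple record type. This is an illustrative sketch using Python's `dataclasses`; the class name `SubtitlePair` and the prompt-formatting helper are hypothetical, not part of the dataset:

```python
from dataclasses import dataclass

@dataclass
class SubtitlePair:
    """One parallel-subtitle record, mirroring the four fields on this card."""
    input_language: str   # e.g. "en"
    target_language: str  # e.g. "ru"
    input_text: str       # subtitle line in the source language
    output_text: str      # translation in the target language

# Example record taken from the Data Instances section
pair = SubtitlePair(
    input_language="en",
    target_language="ru",
    input_text="Oh, bud... what have you done?",
    output_text="Эх, Кореш... Что ж вы наделали?",
)

def to_prompt(p: SubtitlePair) -> str:
    """Format a record as a simple translation prompt."""
    return f"Translate from {p.input_language} to {p.target_language}: {p.input_text}"
```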

Data Splits

The dataset is distributed as a single train split; no validation or test splits are provided, so users should carve out their own held-out sets as needed.
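Since only a train split is shipped, a held-out set has to be created by the user. A minimal, stdlib-only sketch of a deterministic 75/25 split; the toy records stand in for real dataset rows and use the field names from this card:

```python
import random

# Toy records standing in for dataset rows (field names from this card)
records = [
    {"input_language": "en", "target_language": "ru",
     "input_text": "hello", "output_text": "привет"},
    {"input_language": "es", "target_language": "fr",
     "input_text": "hola", "output_text": "bonjour"},
    {"input_language": "en", "target_language": "de",
     "input_text": "good night", "output_text": "gute Nacht"},
    {"input_language": "fr", "target_language": "en",
     "input_text": "merci", "output_text": "thank you"},
]

# Seeded shuffle so the split is reproducible across runs
rng = random.Random(0)
shuffled = records[:]
rng.shuffle(shuffled)

# Hold out the last 25% for validation
cut = int(len(shuffled) * 0.75)
train, valid = shuffled[:cut], shuffled[cut:]
```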

Dataset Creation

Curation Rationale

The dataset was created to provide a multilingual collection of subtitles and their translations for research and natural language processing tasks.

Source Data

The source data for this dataset consists of subtitles from various movies and TV shows.

Personal and Sensitive Information

Because the text is drawn from movie and TV subtitles, it may include names, dialogue about real people, or other sensitive content from those works.

Other Known Limitations

Subtitle alignments and translations may be inaccurate or loosely matched. Validate samples before relying on the data for training or evaluation.
