
Dataset origin: https://zenodo.org/records/4012218

UFAL Parallel Corpus of North Levantine 1.0

March 10, 2023

Authors

Shadi Saleh <saleh@ufal.mff.cuni.cz>
Hashem Sellat <sellat@ufal.mff.cuni.cz>
Mateusz Krubiński <krubinski@ufal.mff.cuni.cz>
Adam Pospíšil <adam.pospisil@ff.cuni.cz>
Petr Zemánek <petr.zemanek@ff.cuni.cz>
Pavel Pecina <pecina@ufal.mff.cuni.cz>

Overview

This is the first release of the UFAL Parallel Corpus of North Levantine, compiled by the Institute of Formal and Applied Linguistics (ÚFAL) at Charles University within the WELCOME project (https://welcome-h2020.eu/). The corpus consists of 120,600 multiparallel sentences in English, French, German, Greek, Spanish, and Standard Arabic, selected from the OpenSubtitles2018 corpus [1] and manually translated into North Levantine Arabic. The corpus was created for the purpose of training machine translation between North Levantine Arabic and the other languages.

Data processing

In OpenSubtitles2018, we identified 3,661,627 sentences in English that were aligned with their translations in all of the following languages: arb, fra, deu, ell, and spa. We then filtered out the sentences that matched any of the following conditions (sketched in code after the list):

  • presence of non-standard characters on the English side (to reduce noise; only the English alphabet, numbers, and the characters .!?,:; '$%£€ were allowed)
  • a lowercase first letter on the English side (to avoid incomplete sentences)
  • presence of fewer than two infrequent words (to increase lexical richness)
  • presence of vulgar words on the English side
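
The filtering code is not part of the release; the following is a minimal Python sketch of the four conditions, where frequent_words and vulgar_words stand in for hypothetical word lists and the tokenization is an assumption.

```python
import re
import string

# Characters allowed on the English side, per the first condition above.
ALLOWED = re.compile(r"[A-Za-z0-9 .!?,:;'$%£€]+")

def keep_sentence(eng, frequent_words, vulgar_words):
    """Return True if the English sentence passes all four filters."""
    if not ALLOWED.fullmatch(eng):
        return False                       # non-standard characters
    if not eng[:1].isupper():
        return False                       # likely an incomplete sentence
    tokens = [t.strip(string.punctuation).lower() for t in eng.split()]
    if sum(1 for t in tokens if t and t not in frequent_words) < 2:
        return False                       # fewer than two infrequent words
    if any(t in vulgar_words for t in tokens):
        return False                       # vulgar content
    return True
```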

Then, we removed exact and near duplicates (detected on the English side) and sampled a subset containing approximately 1 million words on the English side. This resulted in 120,771 multiparallel sentences with an average length of 8.28 words per sentence on the English side.
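
The release does not specify how near duplicates were detected; a common approach, sketched below under that assumption, is to compare case-folded, punctuation-stripped forms of the English sentences.

```python
import re

def normalize(eng):
    """Case-fold and drop punctuation so that near duplicates collide."""
    return re.sub(r"[^a-z0-9]+", " ", eng.lower()).strip()

def remove_duplicates(sentences):
    """Keep the first occurrence of each exact or near-duplicate sentence."""
    seen, kept = set(), []
    for s in sentences:
        key = normalize(s)
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept
```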

The sentences in Standard Arabic were then manually translated into North Levantine Arabic by native speakers. A few erroneous translations (e.g., empty or unfinished ones) were detected automatically and discarded. The remaining translations were aligned with the other languages through Standard Arabic and English. The final corpus comprises 120,600 sentences in English, Spanish, Greek, German, French, Standard Arabic, and the newly added North Levantine Arabic. The table below shows overall statistics; the languages of the data files are denoted by their ISO 639-3 codes.

Language                 ISO 639-3   # words
North Levantine Arabic   apc         738,812
Standard Arabic          arb         802,313
German                   deu         940,234
Greek                    ell         869,543
English                  eng         999,193
French                   fra         956,208
Spanish                  spa         920,922

The translations are provided in seven files; each file contains the data in one language. The files are aligned through line numbers, and the order of the lines is random. We also provide links from the English-centred sentence pairs to the original data in OpenSubtitles2018. This information is stored in the *.ids files, which are aligned through line numbers with the corresponding translations. Each line contains four tab-separated items: the source filename, the target filename, the space-separated positions of the source sentence in the source file, and the space-separated positions of the target sentence in the target file.
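
As an illustration, the sketch below reads one aligned language pair and parses the *.ids metadata; the file names used here (eng.txt, apc.txt) are assumptions, since the description above does not fix the exact naming.

```python
def read_pair(path_a, path_b):
    """Yield sentence pairs aligned by line number across two language files."""
    with open(path_a, encoding="utf-8") as fa, open(path_b, encoding="utf-8") as fb:
        for a, b in zip(fa, fb):
            yield a.rstrip("\n"), b.rstrip("\n")

def parse_ids_line(line):
    """Split one *.ids line into its four tab-separated fields."""
    src_file, tgt_file, src_pos, tgt_pos = line.rstrip("\n").split("\t")
    return {
        "source_file": src_file,              # original OpenSubtitles2018 file
        "target_file": tgt_file,
        "source_positions": src_pos.split(),  # positions within the source file
        "target_positions": tgt_pos.split(),  # positions within the target file
    }

# Example: iterate over English/North Levantine pairs.
for eng, apc in read_pair("eng.txt", "apc.txt"):
    pass  # e.g., feed the pair to an MT training pipeline
```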

References

[1] Pierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018. OpenSubtitles2018: Statistical Rescoring of Sentence Alignments in Large, Noisy Parallel Corpora. Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 1742–1748. Miyazaki, Japan.

Acknowledgement

The work was supported by the European Commission via the H2020 Program, project WELCOME, grant agreement: 870930.

Citation

@misc{11234/1-5033,
  title = {{UFAL} Parallel Corpus of North Levantine 1.0},
  author = {Sellat, Hashem and Saleh, Shadi and Krubi{\'n}ski, Mateusz and Posp{\'{\i}}{\v s}il, Adam and Zem{\'a}nek, Petr and Pecina, Pavel},
  url = {http://hdl.handle.net/11234/1-5033},
  note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
  copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)},
  year = {2023}
}