---
license: odc-by
language:
  - en
  - scn
task_categories:
  - translation
pretty_name: Good Sicilian in the NLLB
size_categories:
  - 100K<n<1M
---

# Good Sicilian in the NLLB

"Language models are few shot learners" (Brown et al. 2020). And after drinking a few shots, Google Translate now slurs its speech and garbles a very strange version of Sicilian, one that does not appear in the NLLB dataset or anywhere in the Sicilian literary tradition. The bartender who served those few shots to Google Translate is not a professional translator. They have never even translated their own website into Sicilian.

Waking up the next morning, we all have a headache, so in lieu of aspirin, Project Napizia supplies this "Good Sicilian" data package to the NLP community. We hope it will help language models learn "Good Sicilian."

## What is "Good Sicilian"?

Arba Sicula has been translating Sicilian poetry and prose into English since 1979. They have translated so much Sicilian language text that Project Napizia trained a neural machine translation model for Sicilian with their bilingual journal (Wdowiak 2021 and Wdowiak 2022). In addition to the journal, Arba Sicula also publishes books on Sicilian language, literature, culture and history. And they organize poetry recitals, concerts, cultural events and an annual tour of Sicily.

"Good Sicilian" presents an 800-year literary tradition. "Good Sicilian" is the literary language described in the three grammar textbooks that Arba Sicula has published.

## The NLLB team's search for "Good Sicilian"

"Good Sicilian" is what Facebook sought to collect during the No Language Left Behind project (2022). Project Napizia wishes that the NLLB team had contacted Arba Sicula. Instead, the NLLB team consulted people without any experience translating the Sicilian language. As the NLLB team explains on page 23 of their paper, Sicilian was one of "the more difficult languages" that they worked with. The bartender served them seed data and validation data with "lower levels of industry-wide standardization."

In particular, the seed data reflected a radical new orthographic proposal that first appeared in 2017, while the lion's share of Sicilian text was written prior to 2017. The dissimilarity between seed data and available data caused the NLLB project to collect poor-quality Sicilian language data.

And because the validation data also reflects the radical new orthographic proposal, the dissimilarity of the validation data is not very helpful when evaluating a model trained on the NLLB data (or any Sicilian language data).

## The "Good Sicilian" in the NLLB dataset

The purpose of this data package is to identify "Good Sicilian" translations in the NLLB dataset.

Upon visual inspection of the original collection, someone acquainted with the Sicilian language will immediately notice a "rhapsody of dialects." The surprise occurs because some of the good translations are not "Good Sicilian." In those cases, the Sicilian reflects a regional or local pronunciation -- what Sicilians and Italians call "dialect." Those sentences come from the Sicilian folklore tradition. It's "good Sicilian folklore," but for language modelling, we need "good Sicilian language." Fortunately, most of the NLLB data reflects the Sicilian literary tradition -- what people call "language."

The purpose of this data package is to identify the good translations that are "Good Sicilian," so that the NLP community can train better language models for the Sicilian language. For that purpose, Project Napizia used one of its translation models to score the pairs on the task of English-to-Sicilian translation and sorted the pairs by score.

Like golf, a lower score is a better score. Napizia's scores come from Sockeye's scorer, which reports the negative log probability that the target subword sequence is a translation of the source subword sequence. So a score close to zero implies a probability close to one.
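Because the score is a negative log probability, a score *s* maps back to a model probability of exp(−*s*). A minimal sketch of ranking pairs under this convention (the scores and sentence pairs below are invented for illustration):

```python
import math

# Hypothetical (score, English, Sicilian) pairs. Sockeye-style scores are
# negative log probabilities, so lower is better.
pairs = [
    (0.35, "Good morning.", "Bongiornu."),
    (2.10, "The sea is calm.", "Lu mari è calmu."),
    (0.90, "Thank you very much.", "Grazzii assai."),
]

# A score s corresponds to a model probability of exp(-s),
# so a score near zero means a probability near one.
for score, en, scn in pairs:
    prob = math.exp(-score)
    print(f"{score:.2f} -> p = {prob:.3f}  {en!r} / {scn!r}")

# Sort ascending: the best (lowest) scores come first.
best_first = sorted(pairs, key=lambda p: p[0])
```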

Napizia plays golf. Facebook plays basketball. Facebook's score measures similarity between sentences. At Facebook, a high score is a better score. We present both Facebook's score and Napizia's score. And we apologize in advance for the inevitable confusion.

Finally, for a convenient way to examine the best pairs, we provide a tab-separated CSV spreadsheet of the 50,000 pairs with the best Napizia score.
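A tab-separated file like the one described can be read with Python's standard `csv` module. The column names and rows below are assumptions for illustration, not the package's actual header:

```python
import csv
import io

# A stand-in for the shipped spreadsheet; the real column names may differ.
tsv_text = (
    "napizia_score\tlaser_score\tenglish\tsicilian\n"
    "0.35\t1.12\tGood morning.\tBongiornu.\n"
    "0.90\t1.07\tThank you very much.\tGrazzii assai.\n"
)

rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))

# Keep only pairs under a chosen Napizia score threshold (lower is better).
good = [r for r in rows if float(r["napizia_score"]) < 0.5]
```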

We hope researchers and practitioners will use this rescored NLLB data to help language models learn "Good Sicilian." We'll update this project with more public collections of "Good Sicilian."

And along with "Good Sicilian," we'll serve the NLP community a giant plate full of cannoli too! ;-)

## Dataset Card -- scored English-Sicilian from NLLB-200vo

### Dataset Summary

This dataset is a subset created from metadata for mined bitext released by Meta AI. The original contains bitext for 148 English-centric and 1465 non-English-centric language pairs, mined with the stopes mining library and the LASER3 encoders (Heffernan et al. 2022).

Subsequently, Allen AI prepared bilingual collections for Hugging Face and for OPUS. The dataset presented here contains 1,057,469 pairs from the OPUS collection scored by Napizia on the task of English-to-Sicilian translation.

### Licensing Information

The dataset is released under the terms of ODC-BY. By using this dataset, you are also bound by the respective Terms of Use and License of the original source.

### Sources

A. Fan et al. (2020). "Beyond English-Centric Multilingual Machine Translation."

K. Heffernan et al. (2022). "Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages."

NLLB Team et al. (2022). "No Language Left Behind: Scaling Human-Centered Machine Translation."

H. Schwenk et al. (2021). "CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web."

J. Tiedemann (2012). "Parallel Data, Tools and Interfaces in OPUS."

E. Wdowiak (2021). "Sicilian Translator: A Recipe for Low-Resource NMT."

E. Wdowiak (2022). "A Recipe for Low-Resource NMT."