{"amh": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. 
Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "amh", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 639911, "num_examples": 1750, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 92753, "num_examples": 250, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 184271, "num_examples": 500, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/amh/train.txt": {"num_bytes": 399218, "checksum": "4cd4bb953f2d2a47172f5589133324657c785ed577acc3ebd7d5a74a106b0883"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/amh/dev.txt": {"num_bytes": 58077, "checksum": "0ba1bb30f7519c255341a9438fda19e0f61f15aa344e5ba038eb50144147e21d"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/amh/test.txt": {"num_bytes": 114656, "checksum": "8957bf06668ea28842acaca3caaf139aac1e078b56bf7aec593c1bb8fff7d938"}}, "download_size": 571951, "post_processing_size": null, "dataset_size": 916935, "size_in_bytes": 1488886}, "hau": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. 
Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "hau", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 929848, "num_examples": 1912, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 139503, "num_examples": 276, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 282971, "num_examples": 552, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/hau/train.txt": {"num_bytes": 436199, "checksum": "f09523f77aacc7d89e935ce8fdd0d7d6afb7ba06e4a6040a431f295b0b937939"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/hau/dev.txt": {"num_bytes": 65392, "checksum": "a4695528b190c629f417e66feda127650611ee8d0f0d0ce14dee8f1506da0f7f"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/hau/test.txt": {"num_bytes": 131781, "checksum": "1f636716cb1633490bc475951721d81a637b033045123f3ee84fee2a1ea06c70"}}, "download_size": 633372, "post_processing_size": null, "dataset_size": 1352322, "size_in_bytes": 1985694}, "ibo": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. 
Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "ibo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 749196, "num_examples": 2235, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 110572, "num_examples": 320, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 222192, "num_examples": 638, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/ibo/train.txt": {"num_bytes": 354547, "checksum": "9731a5268749b43c5f1c7ac1f420aa7b856cbe2a0002b14dd1d2f809257612bb"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/ibo/dev.txt": {"num_bytes": 53684, "checksum": "059e81794392a659efd65218ae988ff4ae680cd5beceae84caa5a8658767e8ee"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/ibo/test.txt": {"num_bytes": 107184, "checksum": "ecfc765813ac82b5807e3cf00e26020ec971e5ebd93c71e172c7f48df427d818"}}, "download_size": 515415, "post_processing_size": null, "dataset_size": 1081960, "size_in_bytes": 1597375}, "kin": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", 
"citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "kin", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 878746, "num_examples": 2116, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 120998, "num_examples": 302, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 258638, "num_examples": 605, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/kin/train.txt": {"num_bytes": 442520, "checksum": "46960b853d33b08759ec04dea5ba390ac196573b9fb868556aa3ed9ecd613d5a"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/kin/dev.txt": {"num_bytes": 61150, "checksum": "2adff8ef6a84927a9ebf68760feaffba8ff4d220fde89d6da88e084287c6cae9"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/kin/test.txt": {"num_bytes": 129354, "checksum": "d49c012b50ef2e1e5f6e424f4348d18ebc171298dd0eb34ec5a000a353f3d98f"}}, "download_size": 633024, "post_processing_size": null, "dataset_size": 1258382, "size_in_bytes": 1891406}, "lug": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real 
Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "lug", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 611917, "num_examples": 1428, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 70058, "num_examples": 200, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 183063, "num_examples": 407, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/lug/train.txt": {"num_bytes": 315679, "checksum": "5884e3c4be037b37dec9c1d60125638dad3510c81f903402d9c31f023deaf7fa"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/lug/dev.txt": {"num_bytes": 35254, "checksum": "654a457c9c7aaba690a6cab0a72390605288fd46668e3c17fa5f84596bbc740c"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/lug/test.txt": {"num_bytes": 94822, "checksum": "6b8b327cecc61d7228be95331e6bf3bbdeea84a3d023172359f0aa1c89b52ecc"}}, "download_size": 445755, "post_processing_size": null, "dataset_size": 865038, "size_in_bytes": 1310793}, "luo": {"description": "MasakhaNER 
is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. 
Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "luo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 314995, "num_examples": 644, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 43506, "num_examples": 92, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 87716, "num_examples": 186, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/luo/train.txt": {"num_bytes": 150174, "checksum": "a676dba1a88bfbf352150145855252f7c47c271c37da8c2d09fb3560e899fcd9"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/luo/dev.txt": {"num_bytes": 20777, "checksum": "37a270c722d1c5462907222bf3ff1cf2453075490f00867a4c8d90838080416a"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/luo/test.txt": {"num_bytes": 42330, "checksum": "9342c9a99a17b08daee32fe9ed414ab1e4810db886ba4e8b1c9f5522438479df"}}, "download_size": 213281, "post_processing_size": null, "dataset_size": 446217, "size_in_bytes": 659498}, "pcm": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. 
Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "pcm", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 868229, "num_examples": 2124, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 126829, "num_examples": 306, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 262185, "num_examples": 600, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/pcm/train.txt": {"num_bytes": 395332, "checksum": "7fbb8bc4f0b456e6283624d05a1bfd0e331c580ebf9564815d6f5b7453588272"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/pcm/dev.txt": {"num_bytes": 57178, "checksum": "34e2dc4868718fbb3646f2d65d3e7802866d4c17bde005ff28bfc3960de2f0bc"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/pcm/test.txt": {"num_bytes": 119544, "checksum": "de0dc15f4f74adff9486d03897354ef30ded9bedb73e058a96e3d1bd0a0c85fc"}}, "download_size": 572054, "post_processing_size": null, "dataset_size": 1257243, "size_in_bytes": 1829297}, "swa": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. 
Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "swa", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1001120, "num_examples": 2109, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 128563, "num_examples": 300, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 272108, "num_examples": 604, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/swa/train.txt": {"num_bytes": 490917, "checksum": "3fbefb76d97a652a29c6f856946af60e31a92ad82927bb748092c5be1b96cb08"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/swa/dev.txt": {"num_bytes": 63078, "checksum": "ee742c38cba9139f1a147baffe7d294d6e38599c67a66ec896500ac2666f53d3"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/swa/test.txt": {"num_bytes": 132318, "checksum": "f91b97def896572dfaaf6db247a01cc1a8b7bd2ce537c19956ec0a0a1ac5913c"}}, "download_size": 686313, "post_processing_size": null, "dataset_size": 1401791, "size_in_bytes": 2088104}, "wol": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", 
"citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "wol", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 602076, "num_examples": 1871, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 71535, "num_examples": 267, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 191484, "num_examples": 539, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/wol/train.txt": {"num_bytes": 252788, "checksum": "160b946c5944313135cd0893030f9f2e5f305e1195e763124bfffd30eac787e8"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/wol/dev.txt": {"num_bytes": 29058, "checksum": "48517a2ad733938805be4bdd285518bed75f14d62068abed74c7eece550e9630"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/wol/test.txt": {"num_bytes": 82617, "checksum": "4791d2b105f1022c6b9180b7d96d10d4daa864d7f7087f1822b55ddbdd22b931"}}, "download_size": 364463, "post_processing_size": null, "dataset_size": 865095, "size_in_bytes": 1229558}, "yor": {"description": "MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities.\n\nExample:\n[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] 
.\nMasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:\n- Amharic\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Nigerian-Pidgin\n- Swahili\n- Wolof\n- Yoruba\n\nThe train/validation/test sets are available for all the ten languages.\n\nFor more details see https://arxiv.org/abs/2103.11811\n", "citation": "@article{Adelani2021MasakhaNERNE,\n  title={MasakhaNER: Named Entity Recognition for African Languages},\n  author={D. Adelani and Jade Abbott and Graham Neubig and Daniel D'Souza and Julia Kreutzer and Constantine Lignos\n  and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and\n  Israel Abebe Azime and S. Muhammad and Chris C. Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and\n  Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and J. Alabi and Seid Muhie Yimam and Tajuddeen R. Gwadabe and\n  Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and V. Otiende and Iroro Orife and Davis David and\n  Samba Ngom and Tosin P. Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and\n  C. Chukwuneke and N. Odu and Eric Peter Wairagala and S. Oyerinde and Clemencia Siro and Tobius Saul Bateesa and\n  Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and\n  Ayodele Awokoya and Mouhamadane Mboup and D. Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and\n   Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and\n   Thierno Ibrahima Diop and A. Diallo and Adewale Akinfaderin and T. Marengereke and Salomey Osei},\n  journal={ArXiv},\n  year={2021},\n  volume={abs/2103.11811}\n}\n", "homepage": "https://arxiv.org/abs/2103.11811", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 9, "names": ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "masakhaner", "config_name": "yor", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1016741, "num_examples": 2171, "dataset_name": "masakhaner"}, "validation": {"name": "validation", "num_bytes": 127415, "num_examples": 305, "dataset_name": "masakhaner"}, "test": {"name": "test", "num_bytes": 359519, "num_examples": 645, "dataset_name": "masakhaner"}}, "download_checksums": {"https://github.com/masakhane-io/masakhane-ner/raw/main/data/yor/train.txt": {"num_bytes": 506380, "checksum": "8baf42a3231ab6f10ce7ed5ed434a5a5447634d1cdf979ac932b2496e3566784"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/yor/dev.txt": {"num_bytes": 62819, "checksum": "6fd8ba4c586606a08669ca56954185092f419ddf9c2c4d950815a1cb03af3832"}, "https://github.com/masakhane-io/masakhane-ner/raw/main/data/yor/test.txt": {"num_bytes": 182311, "checksum": "fbace84a8b7cc0c4078f1600a2408b06b3d182977bb64d55318590cba56c2058"}}, "download_size": 751510, "post_processing_size": null, "dataset_size": 1503675, "size_in_bytes": 2255185}, "en-amh": {"description": "MAFAND-MT 
is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "amh"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "amh"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-amh", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 142259, "num_examples": 899, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 114922, "num_examples": 1037, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-amh/dev.json": {"num_bytes": 386223, "checksum": "9d650265aeaccafeb67296c0020a2984bdbc062221c5a0aa88241e47512b5796"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-amh/test.json": {"num_bytes": 317617, "checksum": "e011f98888b10729c76e9b02f06164e86211396aa32ea459c8c0d013dd5c420a"}}, "download_size": 703840, "post_processing_size": null, "dataset_size": 257181, "size_in_bytes": 961021}, "en-hau": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "hau"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "hau"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-hau", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 878882, "num_examples": 5865, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 198905, "num_examples": 1300, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 220740, "num_examples": 1500, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-hau/train.json": {"num_bytes": 2016183, "checksum": "a9af9184e6de5b1d7a9ec89bc7f2c86add6d6c9b5e11bde7dd9b3ae4bfa01c5a"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-hau/dev.json": {"num_bytes": 462239, "checksum": "10b429752c35eb1fc869800968990b4008d43fff4e4025a0ea65ea7313cfeab2"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-hau/test.json": {"num_bytes": 505781, "checksum": "7f0b44884ab9a22882f3f06c26b43b5fc3ff01b2b33cf3f6090f3861354694f3"}}, "download_size": 2984203, "post_processing_size": null, "dataset_size": 1298527, "size_in_bytes": 4282730}, "en-ibo": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "ibo"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "ibo"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-ibo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 848487, "num_examples": 6998, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 136650, "num_examples": 1500, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 133708, "num_examples": 1500, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-ibo/train.json": {"num_bytes": 1945470, "checksum": "5632edde056ba170a864da712624c27189efb612895543cd1073acee108605e2"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-ibo/dev.json": {"num_bytes": 327129, "checksum": "a1577df2a3d475aec5f2d7b256577dcef5d5c2de717b7bdd4d8d9f33836daaba"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-ibo/test.json": {"num_bytes": 318470, "checksum": "1f08bd80e43e2961effc422e5d94fe70d63b99ba53fb57a9507952f54ebdadaa"}}, "download_size": 2591069, "post_processing_size": null, "dataset_size": 1118845, "size_in_bytes": 3709914}, "en-kin": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "kin"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "kin"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-kin", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 98435, "num_examples": 460, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 185010, "num_examples": 1006, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-kin/dev.json": {"num_bytes": 211873, "checksum": "7588202c3b685fdecc48c2669bf5bc06affe2873d20eb63c9c78bf29b7577fb3"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-kin/test.json": {"num_bytes": 402546, "checksum": "878be516c3940d8642a806a6600dfcc21b58b1f9ac3f61a0cb9c445e88987e18"}}, "download_size": 614419, "post_processing_size": null, "dataset_size": 283445, "size_in_bytes": 897864}, "en-lug": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "lug"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "lug"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-lug", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 505991, "num_examples": 4075, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 168539, "num_examples": 1500, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 164940, "num_examples": 1500, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-lug/train.json": {"num_bytes": 1111251, "checksum": "9a44376795fe82cf1cff2f4ca6003bcd9a7ccb64d70290eed0d983d4c55e15fe"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-lug/dev.json": {"num_bytes": 378421, "checksum": "ad0383d5c406409e8ccd61e9a048867d8961e5236993ff67cfdf4d87b346890b"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-lug/test.json": {"num_bytes": 372528, "checksum": "50bc3aa1e6ae49877b6611b999b843745f5a1e53097fd3e6f9f3319fb368ca1a"}}, "download_size": 1862200, "post_processing_size": null, "dataset_size": 839470, "size_in_bytes": 2701670}, "en-nya": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "nya"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "nya"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-nya", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 104269, "num_examples": 483, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 199485, "num_examples": 1004, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-nya/dev.json": {"num_bytes": 234271, "checksum": "bbe84163a83ccf8c493d773330ab522b6194f85fa5646d538ae70e211183a535"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-nya/test.json": {"num_bytes": 438316, "checksum": "4f3cdfd8e865c452652378dbde33db7c93012a3b064548bd4ca605850b05811f"}}, "download_size": 672587, "post_processing_size": null, "dataset_size": 303754, "size_in_bytes": 976341}, "en-pcm": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "pcm"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "pcm"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-pcm", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1253498, "num_examples": 4790, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 411057, "num_examples": 1484, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 445529, "num_examples": 1564, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-pcm/train.json": {"num_bytes": 1401994, "checksum": "626f201900a9a95d3144b13d55ee7c8afbae70a8d65d73223e89be6a2303551f"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-pcm/dev.json": {"num_bytes": 457055, "checksum": "6c2aef216115d19f26df311bbd3bd4d8eb00d6610e3a9595b2f7854f5dc33b8b"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-pcm/test.json": {"num_bytes": 494006, "checksum": "e57cc6894fdec342c30a86a6df0e40b32e90339e4fabf64d6b85ccc66a714dc8"}}, "download_size": 2353055, "post_processing_size": null, "dataset_size": 2110084, "size_in_bytes": 4463139}, "en-sna": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "sna"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "sna"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-sna", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 75754, "num_examples": 556, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 141706, "num_examples": 1005, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-sna/dev.json": {"num_bytes": 192626, "checksum": "0675fdfcc1d0a1ffb769004552326c1764cef44165791d8947069bf8fd8a2aac"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-sna/test.json": {"num_bytes": 361833, "checksum": "6ce16925d71d2c025072fdd127ffc91925992d3591eef42a3494b289c0ed36b7"}}, "download_size": 554459, "post_processing_size": null, "dataset_size": 217460, "size_in_bytes": 771919}, "en-swa": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "swa"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "swa"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-swa", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3698641, "num_examples": 30782, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 229818, "num_examples": 1791, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 244617, "num_examples": 1835, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-swa/train.json": {"num_bytes": 8447937, "checksum": "c0242f1a34c080502a68521fb71c51fd72046f9f6e9424635f4702c3ad95c0ad"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-swa/dev.json": {"num_bytes": 511062, "checksum": "ab53290c9409255d90fb3962f8242d943cffb42823a528839fbf15837ea4f81b"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-swa/test.json": {"num_bytes": 552699, "checksum": "dbea0373caab6f517d9103130839e38fb78e56316c24b4686f38a25800114bd2"}}, "download_size": 9511698, "post_processing_size": null, "dataset_size": 4173076, "size_in_bytes": 13684774}, "en-tsn": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "tsn"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "tsn"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-tsn", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 269740, "num_examples": 2100, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 71281, "num_examples": 540, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 196623, "num_examples": 1500, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-tsn/train.json": {"num_bytes": 629359, "checksum": "6d370e1cc7b64285edffe0747c73d8c5132de88f46955c0c29706d83d2048e2b"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-tsn/dev.json": {"num_bytes": 169021, "checksum": "806184d52362e2e56e0824bd307812671e67c36791e154a005863cbfcb209665"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-tsn/test.json": {"num_bytes": 450522, "checksum": "fa5daa725b44f49491fd4b268032ab3e9736cdf561066dacf4d5358a6b701da5"}}, "download_size": 1248902, "post_processing_size": null, "dataset_size": 537644, "size_in_bytes": 1786546}, "en-twi": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "twi"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "twi"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-twi", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 418381, "num_examples": 3337, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 214606, "num_examples": 1284, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 257776, "num_examples": 1500, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-twi/train.json": {"num_bytes": 916305, "checksum": "8c91cdfd9d4b8fc9b95e56f3f36a5357383762caf11b15a362c5661711e2ae7e"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-twi/dev.json": {"num_bytes": 438911, "checksum": "d513e8ad49ebd28bb96529864bada539bef35ab222d3afd23d345db8b0c65419"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-twi/test.json": {"num_bytes": 536213, "checksum": "853f6c797666d7d6aa27d26b13238ce79f3244d473caca5be6d0bca229453022"}}, "download_size": 1891429, "post_processing_size": null, "dataset_size": 890763, "size_in_bytes": 2782192}, "en-xho": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "xho"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "xho"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-xho", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 66793, "num_examples": 486, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 136894, "num_examples": 1002, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-xho/dev.json": {"num_bytes": 151023, "checksum": "dd7282362c5aa8bc0e34b559b2ebb8436973d137e31789682ed2139e9a868822"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-xho/test.json": {"num_bytes": 305836, "checksum": "fbee16af922728de4f5099a7a5d69d470562c34aa030101d7fb00a3281456324"}}, "download_size": 456859, "post_processing_size": null, "dataset_size": 203687, "size_in_bytes": 660546}, "en-yor": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "yor"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "yor"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-yor", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 879851, "num_examples": 6644, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 240226, "num_examples": 1544, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 220664, "num_examples": 1558, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-yor/train.json": {"num_bytes": 2320170, "checksum": "fd71596355a45a7eb36895270597769157c178f9a9d23ff62caf7b7f9e5369d0"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-yor/dev.json": {"num_bytes": 628950, "checksum": "43c714c3fe502ac7959c5105db9d44bdeee3bf5e5e302a16a7eb932d4a5685a9"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-yor/test.json": {"num_bytes": 577949, "checksum": "dd9c520c9beb27c2db4990fb94529f18ae245627b37e7be3df2d9627c5471bec"}}, "download_size": 3527069, "post_processing_size": null, "dataset_size": 1340741, "size_in_bytes": 4867810}, "en-zul": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["en", "zul"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "zul"}, "task_templates": null, "builder_name": "mafand", "config_name": "en-zul", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 526615, "num_examples": 3500, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 165136, "num_examples": 1239, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 144763, "num_examples": 998, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-zul/train.json": {"num_bytes": 1192112, "checksum": "ba948c35be642b189a870c07c6d9d47be949b9058cd674bcd1e20712d655c7f5"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-zul/dev.json": {"num_bytes": 397973, "checksum": "8b558001679bf62a391c63ba65e5ecfb83f214a653300c471029ed348a587cde"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/en-zul/test.json": {"num_bytes": 328644, "checksum": "4fe50b6bca98319cc46b469c532f2c15fa2d0a5e9b599703aabbd28c4b106159"}}, "download_size": 1918729, "post_processing_size": null, "dataset_size": 836514, "size_in_bytes": 2755243}, "fr-bam": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["fr", "bam"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "fr", "output": "bam"}, "task_templates": null, "builder_name": "mafand", "config_name": "fr-bam", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 443448, "num_examples": 3013, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 232961, "num_examples": 1500, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 218896, "num_examples": 1500, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-bam/train.json": {"num_bytes": 931245, "checksum": "9df33ceeb5ab8c141792df9829c818261ee0d215998daf5127934eeb0e20b40a"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-bam/dev.json": {"num_bytes": 493456, "checksum": "63fcc13207ab3af36287bea11471d737294aaaa7bdfd94c738e152a5a23a3d0e"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-bam/test.json": {"num_bytes": 469529, "checksum": "6b12d3ad802bc4063df5231fe23b9457642ce172027ab9eafbd21aa119d462f0"}}, "download_size": 1894230, "post_processing_size": null, "dataset_size": 895305, "size_in_bytes": 2789535}, "fr-bbj": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["fr", "bbj"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "fr", "output": "bbj"}, "task_templates": null, "builder_name": "mafand", "config_name": "fr-bbj", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 411629, "num_examples": 2232, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 200234, "num_examples": 1133, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 287713, "num_examples": 1430, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-bbj/train.json": {"num_bytes": 480929, "checksum": "5f5d52a1fb9595fb28e806e4d540a2445dae7ad4b8ae39573caaaf9eb146ecc8"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-bbj/dev.json": {"num_bytes": 235365, "checksum": "7ad65f0b71d5ccdc2a457790075930ab57b85c55e6ad0f70b83aaeb9de67a98b"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-bbj/test.json": {"num_bytes": 332183, "checksum": "db43fb00c0a3a963c25972872ff3a702bb7066e31d2eb8f4145ca311568575c5"}}, "download_size": 1048477, "post_processing_size": null, "dataset_size": 899576, "size_in_bytes": 1948053}, "fr-ewe": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test sets for amh, kin, nya, sna, and xho.\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["fr", "ewe"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "fr", "output": "ewe"}, "task_templates": null, "builder_name": "mafand", "config_name": "fr-ewe", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 287677, "num_examples": 2026, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 193047, "num_examples": 1414, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 212914, "num_examples": 1563, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-ewe/train.json": {"num_bytes": 506055, "checksum": "3b24d35151586f5e6ffa50c68378f0b084ac6ebe7a15b5130665837c0b3ee492"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-ewe/dev.json": {"num_bytes": 379574, "checksum": "7d756d1da5e1ac23b3d68ce9a2ee9c1047c4fe462e51635b2fd204c31c57f454"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-ewe/test.json": {"num_bytes": 422559, "checksum": "e58164a8ce2fa1e9fae85a01db848f6f41b199ef1496af16eb163f876151e823"}}, "download_size": 1308188, "post_processing_size": null, "dataset_size": 693638, "size_in_bytes": 2001826}, "fr-fon": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test sets for amh, kin, nya, sna, and xho.\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["fr", "fon"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "fr", "output": "fon"}, "task_templates": null, "builder_name": "mafand", "config_name": "fr-fon", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 888908, "num_examples": 2637, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 435346, "num_examples": 1227, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 569141, "num_examples": 1579, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-fon/train.json": {"num_bytes": 972727, "checksum": "5a0a5068a0d60320cfa6621c7543a4533ee8cb4b9efb40efd43e183d79dfbd9d"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-fon/dev.json": {"num_bytes": 473769, "checksum": "f2c0a281ab9b9cb1aa4ca4790261087c56a2276b8a649459d1aa5f40a95e3755"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-fon/test.json": {"num_bytes": 618374, "checksum": "aad1cda5ca84966ffc22d970a2f367665158424b5870c642f9eaaf4046ce19d3"}}, "download_size": 2064870, "post_processing_size": null, "dataset_size": 1893395, "size_in_bytes": 3958265}, "fr-mos": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test sets for amh, kin, nya, sna, and xho.\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["fr", "mos"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "fr", "output": "mos"}, "task_templates": null, "builder_name": "mafand", "config_name": "fr-mos", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 732837, "num_examples": 2287, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 489672, "num_examples": 1478, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 417424, "num_examples": 1574, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-mos/train.json": {"num_bytes": 803909, "checksum": "a9eda967fdeb0026873bcdb2e8fc2c3c3d6ac693d9e86e41dc362915067a25bf"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-mos/dev.json": {"num_bytes": 535525, "checksum": "85a51897b535ba999b0c6b56b8cb602294485071960cad2557ee186109bef5dc"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-mos/test.json": {"num_bytes": 466305, "checksum": "ec5607df386fb26ac5060a65f3f6968cb89e0f123bc468c97f2cb23d8f0db6f1"}}, "download_size": 1805739, "post_processing_size": null, "dataset_size": 1639933, "size_in_bytes": 3445672}, "fr-wol": {"description": "MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are: \n- Amharic\n- Bambara\n- Ghomala\n- Ewe\n- Fon\n- Hausa\n- Igbo\n- Kinyarwanda\n- Luganda\n- Luo\n- Mossi\n- Nigerian-Pidgin\n- Chichewa\n- Shona\n- Swahili\n- Setswana\n- Twi\n- Wolof\n- Xhosa\n- Yoruba\n- Zulu\n\nThe train/validation/test sets are available for 16 languages, and validation/test sets for amh, kin, nya, sna, and xho.\n\nFor more details see https://aclanthology.org/2022.naacl-main.223/\n", "citation": "@inproceedings{adelani-etal-2022-thousand,\n    title = \"A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation\",\n    author = \"Adelani, David  and\n      Alabi, Jesujoba  and\n      Fan, Angela  and\n      Kreutzer, Julia  and\n      Shen, Xiaoyu  and\n      Reid, Machel  and\n      Ruiter, Dana  and\n      Klakow, Dietrich  and\n      Nabende, Peter  and\n      Chang, Ernie  and\n      Gwadabe, Tajuddeen  and\n      Sackey, Freshia  and\n      Dossou, Bonaventure F. P.  
and\n      Emezue, Chris  and\n      Leong, Colin  and\n      Beukman, Michael  and\n      Muhammad, Shamsuddeen  and\n      Jarso, Guyo  and\n      Yousuf, Oreen  and\n      Niyongabo Rubungo, Andre  and\n      Hacheme, Gilles  and\n      Wairagala, Eric Peter  and\n      Nasir, Muhammad Umair  and\n      Ajibade, Benjamin  and\n      Ajayi, Tunde  and\n      Gitau, Yvonne  and\n      Abbott, Jade  and\n      Ahmed, Mohamed  and\n      Ochieng, Millicent  and\n      Aremu, Anuoluwapo  and\n      Ogayo, Perez  and\n      Mukiibi, Jonathan  and\n      Ouoba Kabore, Fatoumata  and\n      Kalipe, Godson  and\n      Mbaye, Derguene  and\n      Tapo, Allahsera Auguste  and\n      Memdjokam Koagne, Victoire  and\n      Munkoh-Buabeng, Edwin  and\n      Wagner, Valencia  and\n      Abdulmumin, Idris  and\n      Awokoya, Ayodele  and\n      Buzaaba, Happy  and\n      Sibanda, Blessing  and\n      Bukula, Andiswa  and\n      Manthalu, Sam\",\n    booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\n    month = jul,\n    year = \"2022\",\n    address = \"Seattle, United States\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.naacl-main.223\",\n    doi = \"10.18653/v1/2022.naacl-main.223\",\n    pages = \"3053--3070\",\n    abstract = \"Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. 
We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.\",\n}\n", "homepage": "https://github.com/masakhane-io/lafand-mt", "license": "", "features": {"translation": {"languages": ["fr", "wol"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "fr", "output": "wol"}, "task_templates": null, "builder_name": "mafand", "config_name": "fr-wol", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 524623, "num_examples": 3360, "dataset_name": "mafand"}, "validation": {"name": "validation", "num_bytes": 239103, "num_examples": 1506, "dataset_name": "mafand"}, "test": {"name": "test", "num_bytes": 239236, "num_examples": 1500, "dataset_name": "mafand"}}, "download_checksums": {"https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-wol/train.json": {"num_bytes": 1053446, "checksum": "2d3ee90a53bbdea73a8fe9c95b1219918d88cd37b74efe868b866ca532ea7bb3"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-wol/dev.json": {"num_bytes": 476674, "checksum": "7f5ddc9f17fe05769e9a0f1c469e92652140106aa448a4536fa6916a6bcbe8d8"}, "https://raw.githubusercontent.com/masakhane-io/lafand-mt/main/data/json_files/fr-wol/test.json": {"num_bytes": 492765, "checksum": "4f84711bddc1ae6ad91ffc925104cec51e10de1e5cea6d57b448613f69e57fbe"}}, "download_size": 2022885, "post_processing_size": null, "dataset_size": 1002962, "size_in_bytes": 3025847}}
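Usage sketch (illustrative only, not part of the dataset_infos.json schema above): each top-level key ("fr-bam", "fr-bbj", "fr-ewe", "fr-fon", "fr-mos", "fr-wol", ...) is a builder config for the Hugging Face datasets library, and each example exposes a single "translation" feature keyed by the codes in features.translation.languages. A minimal loading sketch follows; the Hub path "masakhane/mafand" is an assumption inferred from builder_name and the Masakhane organization, so verify it before use.

# Minimal sketch: load one MAFAND-MT config described by the metadata above.
# ASSUMPTION: the dataset is hosted on the Hub as "masakhane/mafand"
# (inferred from builder_name="mafand"; substitute the real path if it differs).
from datasets import load_dataset

# The second argument selects a config; valid names are the top-level keys above.
ds = load_dataset("masakhane/mafand", "fr-bam")

# Each example holds one "translation" dict keyed by the language codes,
# matching supervised_keys ("fr" is the input side, "bam" the output side).
ex = ds["train"][0]
print(ex["translation"]["fr"])   # French source sentence
print(ex["translation"]["bam"])  # Bambara target sentence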