Datasets: AmericasNLI
Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: natural-language-inference
Size: 10K - 100K
ArXiv: 2104.08726
License: cc-by-sa-4.0
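The tags above describe an NLI-style text-classification dataset shipped as Parquet with 10K - 100K rows. A minimal loading sketch, assuming the dataset is published on the Hub as `americas_nli` with per-language configs such as `quy` (take the exact repo id and config names from the dataset page):

```python
from datasets import load_dataset

# Sketch only: the Hub id "americas_nli" and the "quy" (Quechua) config are
# assumptions; use the repo id and config names listed on the dataset page.
ds = load_dataset("americas_nli", "quy")

print(ds)  # DatasetDict with the available splits and their row counts
first_split = next(iter(ds.values()))
print(first_split.features)  # premise / hypothesis / label schema
print(first_split[0])        # one labelled sentence pair
```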
Commit 060c7aa
Parent(s): aa38975

Update dataset card:
- Add license
- Add repository URL
- Update citation information
README.md
CHANGED
@@ -14,8 +14,7 @@ language:
 - qu
 - shp
 - tar
-license:
-- unknown
+license: cc-by-sa-4.0
 multilinguality:
 - multilingual
 - translation
@@ -370,6 +369,7 @@ configs:
 ## Dataset Description
 
 - **Homepage:** [Needs More Information]
+- **Repository:** https://github.com/abteen/americasnli
 - **Repository:** https://github.com/nala-cub/AmericasNLI
 - **Paper:** https://arxiv.org/abs/2104.08726
 - **Leaderboard:** [Needs More Information]
@@ -613,40 +613,38 @@ As per paragraph 3.1 of the [original paper](https://arxiv.org/abs/2104.08726).
 
 ### Licensing Information
 
-
+Creative Commons Attribution Share Alike 4.0 International: https://github.com/abteen/americasnli/blob/main/LICENSE.md
 
 ### Citation Information
 
 ```
-@
-biburl = {https://dblp.org/rec/journals/corr/abs-2104-08726.bib},
-bibsource = {dblp computer science bibliography, https://dblp.org}
+@inproceedings{ebrahimi-etal-2022-americasnli,
+    title = "{A}mericas{NLI}: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages",
+    author = "Ebrahimi, Abteen and
+      Mager, Manuel and
+      Oncevay, Arturo and
+      Chaudhary, Vishrav and
+      Chiruzzo, Luis and
+      Fan, Angela and
+      Ortega, John and
+      Ramos, Ricardo and
+      Rios, Annette and
+      Meza Ruiz, Ivan Vladimir and
+      Gim{\'e}nez-Lugo, Gustavo and
+      Mager, Elisabeth and
+      Neubig, Graham and
+      Palmer, Alexis and
+      Coto-Solano, Rolando and
+      Vu, Thang and
+      Kann, Katharina",
+    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+    month = may,
+    year = "2022",
+    address = "Dublin, Ireland",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.acl-long.435",
+    pages = "6279--6299",
+    abstract = "Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining. However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages. To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 Indigenous languages of the Americas. We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches. Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models. We find that XLM-R{'}s zero-shot performance is poor for all 10 languages, with an average performance of 38.48{\%}. Continued pretraining offers improvements, with an average accuracy of 43.85{\%}. Surprisingly, training on poorly translated data by far outperforms all other methods with an accuracy of 49.12{\%}.",
 }
 ```
 
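Since the substance of this commit is the card's YAML front matter (license, repository, citation), one way to sanity-check the result after merging is to read the card back through huggingface_hub. A small sketch, assuming the dataset's Hub repo id is `americas_nli`:

```python
from huggingface_hub import DatasetCard

# Sketch only: "americas_nli" is an assumed repo id; substitute the real one.
card = DatasetCard.load("americas_nli")

print(card.data.license)            # expected "cc-by-sa-4.0" after this commit
print(sorted(card.data.to_dict()))  # remaining front-matter keys (language, multilinguality, ...)
```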
|