Size Categories: unknown
Language Creators: expert-generated
Annotations Creators: expert-generated
Source Datasets: extended|xnli
albertvillanova (HF staff) committed
Commit 1f3f4fa
1 Parent(s): aa38975

Update dataset card (#4)


- Update dataset card (060c7aaaf74133a5ebaa791d64bfc5d5681eba59)

Files changed (1)
  1. README.md +30 -32
README.md CHANGED
@@ -14,8 +14,7 @@ language:
 - qu
 - shp
 - tar
-license:
-- unknown
+license: cc-by-sa-4.0
 multilinguality:
 - multilingual
 - translation
@@ -370,6 +369,7 @@ configs:
 ## Dataset Description
 
 - **Homepage:** [Needs More Information]
+- **Repository:** https://github.com/abteen/americasnli
 - **Repository:** https://github.com/nala-cub/AmericasNLI
 - **Paper:** https://arxiv.org/abs/2104.08726
 - **Leaderboard:** [Needs More Information]
@@ -613,40 +613,38 @@ As per paragraph 3.1 of the [original paper](https://arxiv.org/abs/2104.08726).
 
 ### Licensing Information
 
-[Needs More Information]
+Creative Commons Attribution Share Alike 4.0 International: https://github.com/abteen/americasnli/blob/main/LICENSE.md
 
 ### Citation Information
 
 ```
-@article{DBLP:journals/corr/abs-2104-08726,
-  author     = {Abteen Ebrahimi and
-                Manuel Mager and
-                Arturo Oncevay and
-                Vishrav Chaudhary and
-                Luis Chiruzzo and
-                Angela Fan and
-                John Ortega and
-                Ricardo Ramos and
-                Annette Rios and
-                Ivan Vladimir and
-                Gustavo A. Gim{\'{e}}nez{-}Lugo and
-                Elisabeth Mager and
-                Graham Neubig and
-                Alexis Palmer and
-                Rolando A. Coto Solano and
-                Ngoc Thang Vu and
-                Katharina Kann},
-  title      = {AmericasNLI: Evaluating Zero-shot Natural Language Understanding of
-                Pretrained Multilingual Models in Truly Low-resource Languages},
-  journal    = {CoRR},
-  volume     = {abs/2104.08726},
-  year       = {2021},
-  url        = {https://arxiv.org/abs/2104.08726},
-  eprinttype = {arXiv},
-  eprint     = {2104.08726},
-  timestamp  = {Mon, 26 Apr 2021 17:25:10 +0200},
-  biburl     = {https://dblp.org/rec/journals/corr/abs-2104-08726.bib},
-  bibsource  = {dblp computer science bibliography, https://dblp.org}
+@inproceedings{ebrahimi-etal-2022-americasnli,
+    title = "{A}mericas{NLI}: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages",
+    author = "Ebrahimi, Abteen  and
+      Mager, Manuel  and
+      Oncevay, Arturo  and
+      Chaudhary, Vishrav  and
+      Chiruzzo, Luis  and
+      Fan, Angela  and
+      Ortega, John  and
+      Ramos, Ricardo  and
+      Rios, Annette  and
+      Meza Ruiz, Ivan Vladimir  and
+      Gim{\'e}nez-Lugo, Gustavo  and
+      Mager, Elisabeth  and
+      Neubig, Graham  and
+      Palmer, Alexis  and
+      Coto-Solano, Rolando  and
+      Vu, Thang  and
+      Kann, Katharina",
+    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+    month = may,
+    year = "2022",
+    address = "Dublin, Ireland",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.acl-long.435",
+    pages = "6279--6299",
+    abstract = "Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining. However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages. To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 Indigenous languages of the Americas. We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches. Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models. We find that XLM-R{'}s zero-shot performance is poor for all 10 languages, with an average performance of 38.48{\%}. Continued pretraining offers improvements, with an average accuracy of 43.85{\%}. Surprisingly, training on poorly translated data by far outperforms all other methods with an accuracy of 49.12{\%}.",
 }
 ```
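
For anyone sanity-checking the updated card against the data itself, here is a minimal loading sketch using the `datasets` library. The Hub dataset ID (`americas_nli`), the per-language config name (`aym`), and the availability of a `validation` split are assumptions inferred from the card and the XNLI lineage, not confirmed by this diff.

```python
# Minimal sketch of loading AmericasNLI with the `datasets` library.
# Assumptions (not stated in this diff): the Hub ID is "americas_nli",
# per-language configs such as "aym" (Aymara) exist, and the data ships
# as validation/test splits in XNLI's premise/hypothesis/label format.
from datasets import load_dataset

ds = load_dataset("americas_nli", "aym", split="validation")

print(ds)                    # number of rows and column names
print(ds[0])                 # {'premise': ..., 'hypothesis': ..., 'label': ...}
print(ds.features["label"])  # expected ClassLabel: entailment / neutral / contradiction
```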