julien-c (HF staff) committed on

Commit 6bcb233
1 Parent(s): 00201db

Migrate model card from transformers-repo

Read the announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/mrm8488/mT5-small-finetuned-tydiqa-for-xqa/README.md

Files changed (1): README.md (+85 lines)

README.md ADDED:

---
language: multilingual
datasets:
- tydiqa
pipeline_tag: question-answering
---

# mT5-small fine-tuned on TyDiQA for multilingual QA 🗺📖❓

[Google's mT5-small](https://huggingface.co/google/mt5-small) fine-tuned on [TyDi QA](https://huggingface.co/nlp/viewer/?dataset=tydiqa&config=secondary_task) (secondary task) for the **multilingual Q&A** downstream task.

## Details of mT5

[Google's mT5](https://github.com/google-research/multilingual-t5)

mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.

**Note**: mT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)

Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)

Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)

Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*

## Details of the dataset 📚

**TyDi QA** is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the world's languages. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer but don't know it yet (unlike SQuAD and its descendants), and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD).

| Dataset | Task  | Split | # samples |
| ------- | ----- | ----- | --------- |
| TyDi QA | GoldP | train | 49881     |
| TyDi QA | GoldP | valid | 5077      |
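
For reference, below is a minimal sketch of how a fine-tuning run on the GoldP data above might look with the 🤗 `datasets` and `transformers` Trainer APIs. It is a hypothetical reconstruction, not the exact script used to produce this checkpoint, and the hyperparameters (batch size, learning rate, epochs, sequence lengths) are placeholders.

```python
# Hypothetical fine-tuning sketch (not the exact recipe behind this checkpoint).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# TyDi QA GoldP is exposed as the "secondary_task" config of the tydiqa dataset.
dataset = load_dataset("tydiqa", "secondary_task")

def preprocess(batch):
    # Same "question: ... context: ..." prompt format as the inference code below.
    inputs = ["question: %s context: %s" % (q, c)
              for q, c in zip(batch["question"], batch["context"])]
    targets = [answers["text"][0] for answers in batch["answers"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=targets, max_length=32, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

# Placeholder hyperparameters.
args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-tydiqa",
    per_device_train_batch_size=8,
    learning_rate=3e-4,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```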

## Results on validation dataset 📝

| Metric | Value     |
| ------ | --------- |
| **EM** | **41.65** |
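
The EM figure above is presumably standard SQuAD-style exact match (normalized string equality against the gold answers). A hedged sketch of how such a score could be computed is shown below; the normalization follows the usual SQuAD evaluation conventions and is an assumption rather than the exact evaluation script used for this card.

```python
# Hypothetical SQuAD-style exact-match computation; assumes predictions are
# already decoded strings (e.g. from get_response below, special tokens removed).
import re
import string

def normalize(text):
    # Lowercase, strip punctuation and English articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    # Correct if the prediction matches any reference answer after normalization.
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

# em_scores = [exact_match(get_response(ex["question"], ex["context"]),
#                          ex["answers"]["text"]) for ex in validation_set]
# print(100 * sum(em_scores) / len(em_scores))
```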

## Model in Action 🚀

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# mT5 is an encoder-decoder (text-to-text) model, so it is loaded as a seq2seq LM.
tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa").to(device)

def get_response(question, context, max_length=32):
    # The model was fine-tuned on inputs of the form "question: ... context: ...".
    input_text = 'question: %s context: %s' % (question, context)
    features = tokenizer([input_text], return_tensors='pt')

    output = model.generate(input_ids=features['input_ids'].to(device),
                            attention_mask=features['attention_mask'].to(device),
                            max_length=max_length)

    return tokenizer.decode(output[0], skip_special_tokens=True)

# Some examples in different languages

context = 'HuggingFace won the best Demo paper at EMNLP2020.'
question = 'What did HuggingFace win?'
get_response(question, context)

context = 'HuggingFace ganó la mejor demostración con su paper en la EMNLP2020.'
question = '¿Qué ganó HuggingFace?'
get_response(question, context)

context = 'HuggingFace выиграл лучшую демонстрационную работу на EMNLP2020.'
question = 'Что победило в HuggingFace?'
get_response(question, context)
```
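
Since this is a text-to-text model, it should also work with the generic `text2text-generation` pipeline using the same prompt format, as sketched below. This is an assumption about pipeline compatibility rather than something stated in the original card (the extractive `question-answering` pipeline is not suitable for a generative model like this one).

```python
from transformers import pipeline

# Hypothetical usage via the generic text2text-generation pipeline,
# reusing the "question: ... context: ..." prompt format from get_response above.
qa = pipeline("text2text-generation",
              model="mrm8488/mT5-small-finetuned-tydiqa-for-xqa")

prompt = ("question: What did HuggingFace win? "
          "context: HuggingFace won the best Demo paper at EMNLP2020.")
print(qa(prompt, max_length=32)[0]["generated_text"])
```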

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain