Commit c6b0232 (parent: a698c9b): update README.md

README.md CHANGED
@@ -1,19 +1,35 @@
- IndicBARTSS is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use IndicBARTSS to build natural language generation applications for Indian languages by fine-tuning the model with supervised training data for tasks like machine translation, summarization, and question generation. Some salient features of IndicBARTSS are:
-
- <ul>
- <li>Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, Telugu and English. Not all of these languages are supported by mBART50 and mT5. </li>
- <li>The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding. </li>
- <li>Trained on large Indic language corpora (452 million sentences and 9 billion tokens) which also include Indian English content. </li>
- <li>Each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
- </ul>
-
- For detailed documentation, look here: https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/
-
# Usage:
@@ -21,25 +37,25 @@ We used the <a href="https://indicnlp.ai4bharat.org/corpora/">IndicCorp</a> data

from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

- tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)
-
- # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)
-
- model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBARTSS")
-
- # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBARTSS")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")

- # To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
-
- # First tokenize the input and outputs. The format below is how IndicBARTSS was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
- inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])
- out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
@@ -59,67 +75,27 @@ model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min

decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

- print(decoded_output) # I am a boy
-
- # What if we mask?
-
- inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
-
- model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
-
- decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
-
- print(decoded_output) # I am happy
-
- inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
-
- model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
-
- decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
-
- print(decoded_output) # मैं जानता हूँ
-
- inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
-
- model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2mr>"))
-
- decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
-
- print(decoded_output) # मला ओळखलं पाहिजे
-
- ```

# Notes:
1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on SentencePiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.

- # Fine-tuning on a downstream task
-
- 1. If you wish to fine-tune this model, then you can do so using the <a href="https://github.com/prajdabre/yanmtt">YANMTT</a> toolkit, following the instructions <a href="https://github.com/AI4Bharat/indic-bart">here</a>.
- 2. (Untested) Alternatively, you may use the official huggingface scripts for <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation">translation</a> and <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization">summarization</a>.
-
- # Contributors
- <ul>
- <li> Raj Dabre </li>
- <li> Himani Shrotriya </li>
- <li> Anoop Kunchukuttan </li>
- <li> Ratish Puduppully </li>
- <li> Mitesh M. Khapra </li>
- <li> Pratyush Kumar </li>
- </ul>
-
- # Paper
- If you use IndicBARTSS, please cite the following paper:
- ```
- @misc{dabre2021indicbart,
-   title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages},
-   author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
-   year={2021},
-   eprint={2109.02903},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL}
- }
- ```
-
- # License
- The model is available under the MIT License.

+ ---
+ languages:
+ - bn
+ - gu
+ - hi
+ - mr
+ - pa
+ - ta
+ - te
+
+ tags:
+ - multilingual
+ - nlp
+ - indicnlp
+
+ widget:
+ - text: टेसा जॉवल का कहना है कि मृतकों और लापता लोगों के परिजनों की मदद के लिए एक केंद्र स्थापित किया जा रहा है. उन्होंने इस हादसे के तीन के बाद भी मृतकों की सूची जारी करने में हो रही देरी के बारे में स्पष्टीकरण देते हुए कहा है शवों की ठीक पहचान होना ज़रूरी है. पुलिस के अनुसार धमाकों में मारे गए लोगों की संख्या अब 49 हो गई है और अब भी 20 से ज़्यादा लोग लापता हैं. पुलिस के अनुसार लंदन पर हमले योजनाबद्ध तरीके से हुए और भूमिगत रेलगाड़ियों में विस्फोट तो 50 सैकेंड के भीतर हुए. पहचान की प्रक्रिया किंग्स क्रॉस स्टेशन के पास सुरंग में धमाके से क्षतिग्रस्त रेल कोचों में अब भी पड़े शवों के बारे में स्थिति साफ नहीं है और पुलिस ने आगाह किया है कि हताहतों की संख्या बढ़ सकती है. पुलिस, न्यायिक अधिकारियों, चिकित्सकों और अन्य विशेषज्ञों का एक आयोग बनाया गया है जिसकी देख-रेख में शवों की पहचान की प्रक्रिया पूरी होगी. महत्वपूर्ण है कि गुरुवार को लंदन में मृतकों के सम्मान में सार्वजनिक समारोह होगा जिसमें उन्हें श्रद्धाँजलि दी जाएगी और दो मिनट का मौन रखा जाएगा. पुलिस का कहना है कि वह इस्लामी चरमपंथी संगठन अबू हफ़्स अल-मासरी ब्रिगेड्स का इन धमाकों के बारे में किए गए दावे को गंभीरता से ले रही है. 'धमाके पचास सेकेंड में हुए' पुलिस के अनुसार लंदन पर हुए हमले योजनाबद्ध तरीके से किए गए थे. पुलिस के अनुसार भूमिगत रेलों में तीन बम अलग-अलग जगहों लगभग अचानक फटे थे. इसलिए पुलिस को संदेह है कि धमाकों में टाइमिंग उपकरणों का उपयोग किया गया होगा. यह भी तथ्य सामने आया है कि धमाकों में आधुनिक किस्म के विस्फोटकों का उपयोग किया गया था. पहले माना जा रहा था कि हमलों में देसी विस्फोटकों का इस्तेमाल किया गया होगा. पुलिस मुख्यालय स्कॉटलैंड यार्ड में सहायक उपायुक्त ब्रायन पैडिक ने बताया कि भूमिगत रेलों में तीन धमाके 50 सेकेंड के अंतराल के भीतर हुए थे. धमाके गुरुवार सुबह आठ बजकर पचास मिनट पर हुए थे. लंदन अंडरग्राउंड से मिली विस्तृत तकनीकी सूचनाओं से यह तथ्य सामने आया है. इससे पहले बम धमाकों में अच्छे खासे अंतराल की बात की जा रही थी.</s> <2hi>
+
+ ---

+ IndicBART-XLSum is a multilingual sequence-to-sequence model for Indic languages, based on the separate-script [IndicBART](https://huggingface.co/ai4bharat/IndicBARTSS), which in turn uses the mBART architecture. It currently supports 7 Indian languages. Some salient features of IndicBART-XLSum are:
+
+ <ul>
+ <li>Supported languages: Bengali, Gujarati, Hindi, Marathi, Punjabi, Tamil and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
+ <li>The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding. </li>
+ <li>Trained on the Indic portion of the [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) corpus. </li>
+ <li>Each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
+ </ul>
+
+ You can read more about IndicBARTSS in this <a href="https://arxiv.org/abs/2109.02903">paper</a>.

# Usage:
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

+ tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART-XLSum", do_lower_case=False, use_fast=False, keep_accents=True)
+
+ # Or use tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART-XLSum", do_lower_case=False, use_fast=False, keep_accents=True)
+
+ model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART-XLSum")
+
+ # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBART-XLSum")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")

+ # To get lang_id use any of ['<2bn>', '<2gu>', '<2hi>', '<2mr>', '<2pa>', '<2ta>', '<2te>']
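# For example (an illustrative line, not from the card; the variable name is invented),
# a tag id is fetched the same way as the special-token ids above:
# hi_lang_id = tokenizer._convert_token_to_id_with_added_voc("<2hi>")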

+ # First tokenize the input and outputs. The format below is how IndicBART-XLSum was trained, so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".

+ inp = tokenizer("टेसा जॉवल का कहना है कि मृतकों और लापता लोगों के परिजनों की मदद के लिए एक केंद्र स्थापित किया जा रहा है. उन्होंने इस हादसे के तीन के बाद भी मृतकों की सूची जारी करने में हो रही देरी के बारे में स्पष्टीकरण देते हुए कहा है शवों की ठीक पहचान होना ज़रूरी है. पुलिस के अनुसार धमाकों में मारे गए लोगों की संख्या अब 49 हो गई है और अब भी 20 से ज़्यादा लोग लापता हैं. पुलिस के अनुसार लंदन पर हमले योजनाबद्ध तरीके से हुए और भूमिगत रेलगाड़ियों में विस्फोट तो 50 सैकेंड के भीतर हुए. पहचान की प्रक्रिया किंग्स क्रॉस स्टेशन के पास सुरंग में धमाके से क्षतिग्रस्त रेल कोचों में अब भी पड़े शवों के बारे में स्थिति साफ नहीं है और पुलिस ने आगाह किया है कि हताहतों की संख्या बढ़ सकती है. पुलिस, न्यायिक अधिकारियों, चिकित्सकों और अन्य विशेषज्ञों का एक आयोग बनाया गया है जिसकी देख-रेख में शवों की पहचान की प्रक्रिया पूरी होगी. महत्वपूर्ण है कि गुरुवार को लंदन में मृतकों के सम्मान में सार्वजनिक समारोह होगा जिसमें उन्हें श्रद्धाँजलि दी जाएगी और दो मिनट का मौन रखा जाएगा. पुलिस का कहना है कि वह इस्लामी चरमपंथी संगठन अबू हफ़्स अल-मासरी ब्रिगेड्स का इन धमाकों के बारे में किए गए दावे को गंभीरता से ले रही है. 'धमाके पचास सेकेंड में हुए' पुलिस के अनुसार लंदन पर हुए हमले योजनाबद्ध तरीके से किए गए थे. पुलिस के अनुसार भूमिगत रेलों में तीन बम अलग-अलग जगहों लगभग अचानक फटे थे. इसलिए पुलिस को संदेह है कि धमाकों में टाइमिंग उपकरणों का उपयोग किया गया होगा. यह भी तथ्य सामने आया है कि धमाकों में आधुनिक किस्म के विस्फोटकों का उपयोग किया गया था. पहले माना जा रहा था कि हमलों में देसी विस्फोटकों का इस्तेमाल किया गया होगा. पुलिस मुख्यालय स्कॉटलैंड यार्ड में सहायक उपायुक्त ब्रायन पैडिक ने बताया कि भूमिगत रेलों में तीन धमाके 50 सेकेंड के अंतराल के भीतर हुए थे. धमाके गुरुवार सुबह आठ बजकर पचास मिनट पर हुए थे. लंदन अंडरग्राउंड से मिली विस्तृत तकनीकी सूचनाओं से यह तथ्य सामने आया है. इससे पहले बम धमाकों में अच्छे खासे अंतराल की बात की जा रही थी.</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
+
+ out = tokenizer("<2hi> परिजनों की मदद की ज़िम्मेदारी मंत्री पर </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
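# Note the shift above: decoder_input_ids is out[:,0:-1] (starts at <2hi>, drops the
# final </s>) while labels is out[:,1:] (drops <2hi>), i.e. the standard teacher-forcing
# setup where each decoder position is trained to predict the next target token.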
# For loss
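# (The loss/logits reads and the generate call live in unchanged lines this diff does
# not display. Going by the old version shown above, the generation step is along the
# lines of the following, presumably with "<2hi>" as the decoder start token since this
# example wants a Hindi summary:)
# model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))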
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

+ print(decoded_output) # लंदन धमाकों में मारे गए लोगों की सूची जारी

+ ```
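Taken together, the added usage lines amount to the following minimal end-to-end sketch. This is an illustration only: the `summarize` helper, its defaults, and the beam settings are carried over from the snippet above rather than specified by the card.

```
from transformers import AlbertTokenizer, AutoModelForSeq2SeqLM

tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART-XLSum", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART-XLSum")

def summarize(text, lang_tag="<2hi>", max_length=40):
    # Inputs follow the training format: "Sentence </s> <2xx>"
    inp = tokenizer(text + " </s> " + lang_tag, add_special_tokens=False, return_tensors="pt", padding=True).input_ids
    out = model.generate(
        inp,
        use_cache=True,
        num_beams=4,
        max_length=max_length,
        min_length=1,
        early_stopping=True,
        pad_token_id=tokenizer._convert_token_to_id_with_added_voc("<pad>"),
        bos_token_id=tokenizer._convert_token_to_id_with_added_voc("<s>"),
        eos_token_id=tokenizer._convert_token_to_id_with_added_voc("</s>"),
        # Decoding starts from the target-language tag, as in the card's example.
        decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc(lang_tag),
    )
    return tokenizer.decode(out[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
```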

+ # Benchmarks
+
+ Scores on the `IndicBART-XLSum` test sets are as follows:
+
+ Language | Rouge-1 / Rouge-2 / Rouge-L
+ ---------|----------------------------
+ bn | 0.172331 / 0.051777 / 0.160245
+ gu | 0.143240 / 0.039993 / 0.133981
+ hi | 0.220394 / 0.065464 / 0.198816
+ mr | 0.172568 / 0.062591 / 0.160403
+ pa | 0.218274 / 0.066087 / 0.192010
+ ta | 0.177317 / 0.058636 / 0.166324
+ te | 0.156386 / 0.041042 / 0.144179
+ average | 0.180073 / 0.055084 / 0.165137
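The card does not say which ROUGE implementation produced these numbers. As a rough illustration only, scores in this spirit can be computed with the `evaluate` package; note that its default English-oriented tokenization understates ROUGE on Indic scripts (XL-Sum's official evaluation uses a multilingual ROUGE variant), so the sketch below will not reproduce the table.

```
import evaluate

rouge = evaluate.load("rouge")
predictions = ["लंदन धमाकों में मारे गए लोगों की सूची जारी"]  # model output from the usage example
references = ["परिजनों की मदद की ज़िम्मेदारी मंत्री पर"]      # reference summary from the usage example
print(rouge.compute(predictions=predictions, references=references))
```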
# Notes:
1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on SentencePiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
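As one illustration of note 2 (a sketch; the `sequence_logprob` helper below is invented here, not part of the card), the same shifted forward pass used for the training loss can also score a candidate summary:

```
import torch

def sequence_logprob(model, inp_ids, out_ids, pad_id):
    # Score out_ids given inp_ids using the same shift as in training.
    with torch.no_grad():
        logits = model(input_ids=inp_ids, decoder_input_ids=out_ids[:, :-1]).logits
    logprobs = torch.log_softmax(logits, dim=-1)
    targets = out_ids[:, 1:]                                   # tokens to be predicted
    token_lp = logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mask = targets.ne(pad_id)                                  # ignore padding positions
    return (token_lp * mask).sum(dim=-1)                       # summed log-probability per sequence

# e.g. sequence_logprob(model, inp, out, pad_id) with the tensors built in the usage section
```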