prajdabre committed
Commit 5da3059
1 Parent(s): 16811c3

Update README.md

Files changed (1): README.md (+44 -4)
README.md CHANGED
@@ -1,6 +1,20 @@
- This is the IndicBART model. For detailed documentation look here: https://indicnlp.ai4bharat.org/indic-bart/ and https://github.com/AI4Bharat/indic-bart/

- Usage:

  ```
  from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
@@ -57,8 +71,34 @@ decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean
  print(decoded_output) # I am happy
  ```

- Notes:
  1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
  2. While I have only shown how to get the logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
  3. If you wish to fine-tune this model, you can do so using the YANMTT toolkit, following the instructions here: https://github.com/AI4Bharat/indic-bart
- 4. Note that the tokenizer I have used is based on SentencePiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
+ IndicBART is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use the IndicBART model to build natural language generation applications for Indian languages by fine-tuning it with supervised training data for tasks like machine translation, summarization, and question generation. Some salient features of IndicBART are:
+
+ <ul>
+ <li> Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, Telugu and English. Not all of these languages are supported by mBART50 and mT5. </li>
+ <li> The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding. </li>
+ <li> Trained on large Indic-language corpora (452 million sentences and 9 billion tokens), which also include Indian English content. </li>
+ </ul>
+
+ You can read more about IndicBART in this paper: https://arxiv.org/abs/2109.02903
+
+ For detailed documentation, look here: https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/
+
+ # Pre-training corpus
+
+ We used IndicCorp data spanning 12 languages, with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.
+
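To make the text-infilling objective concrete, here is a small illustrative sketch: a contiguous span of a sentence is replaced with a single mask token, and the model learns to reconstruct the original sentence. The mask token and span-sampling details below are assumptions for illustration, not the actual IndicBART pre-processing pipeline.

```
# Illustrative only: a toy text-infilling (mBART-style) training pair.
# The mask token and span choice here are assumptions, not taken from this README.
original = "I am a boy and I like playing cricket"  # decoder target: the clean sentence
infilled = "I am a <mask> playing cricket"          # encoder input: a span replaced by one mask token

# The model is trained to regenerate `original` given `infilled`.
encoder_input, decoder_target = infilled, original
```
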
+ # Usage:

  ```
  from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM

  print(decoded_output) # I am happy
  ```

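The diff collapses most of the usage example above. For orientation, the following is a minimal end-to-end sketch, assuming the ai4bharat/IndicBART checkpoint id and the "sentence </s> <2xx>" language-tag input convention described in the linked documentation; treat the exact tags, sentences, and generation arguments as illustrative assumptions rather than the README's verbatim code.

```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint id and keyword arguments are assumptions based on this model page.
tokenizer = AutoTokenizer.from_pretrained(
    "ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True
)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART")

# Encoder input: source sentence, end-of-sentence marker, then a language tag.
inp = tokenizer(
    "I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt"
).input_ids
# Decoder side: target-language tag, target sentence, end-of-sentence marker.
out = tokenizer(
    "<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt"
).input_ids

# Teacher-forced forward pass: gives the loss and logits mentioned in the notes.
model_outputs = model(input_ids=inp, decoder_input_ids=out[:, :-1], labels=out[:, 1:])
print(model_outputs.loss, model_outputs.logits.shape)

# Generation: start decoding from the desired target-language tag.
model.eval()
model_output = model.generate(
    inp,
    num_beams=4,
    max_length=20,
    early_stopping=True,
    pad_token_id=tokenizer.convert_tokens_to_ids("<pad>"),
    bos_token_id=tokenizer.convert_tokens_to_ids("<s>"),
    eos_token_id=tokenizer.convert_tokens_to_ids("</s>"),
    decoder_start_token_id=tokenizer.convert_tokens_to_ids("<2en>"),
)
decoded_output = tokenizer.decode(
    model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(decoded_output)
```
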
+ # Notes:
  1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
  2. While I have only shown how to get the logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
  3. If you wish to fine-tune this model, you can do so using the YANMTT toolkit, following the instructions here: https://github.com/AI4Bharat/indic-bart
+ 4. Note that the tokenizer I have used is based on SentencePiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class (see the sketch after this list).
+
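As a concrete illustration of note 4, here is a minimal sketch of loading the tokenizer through the SentencePiece-based AlbertTokenizer class; the checkpoint id and keyword arguments are assumptions based on this model page, not verbatim README code.

```
# Hedged sketch for note 4: use AlbertTokenizer (SentencePiece-based),
# not MBartTokenizer. Checkpoint id and kwargs are assumptions.
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained(
    "ai4bharat/IndicBART", do_lower_case=False, keep_accents=True
)
print(type(tokenizer).__name__)  # -> AlbertTokenizer
```
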
+ # Contributors
+ <ul>
+ <li> Raj Dabre </li>
+ <li> Himani Shrotriya </li>
+ <li> Anoop Kunchukuttan </li>
+ <li> Ratish Puduppully </li>
+ <li> Mitesh M. Khapra </li>
+ <li> Pratyush Kumar </li>
+ </ul>
+
+ # Paper
+ If you use IndicBART, please cite the following paper:
+ ```
+ @misc{dabre2021indicbart,
+ title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages},
+ author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
+ year={2021},
+ eprint={2109.02903},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ # License
+ The model is available under the MIT License.