---
license: apache-2.0
dataset_info:
  features:
  - name: dataset_name
    dtype: string
  - name: subset_name
    dtype: string
  - name: prompt_id
    dtype: string
  - name: template_name
    dtype: string
  - name: dataset_key
    dtype: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 11180104753
    num_examples: 12810390
  download_size: 2116747189
  dataset_size: 11180104753
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# **Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages**
Cendol is an open-source collection of fine-tuned generative large language models for Indonesian languages, covering decoder-only and encoder-decoder transformer architectures that range in scale from 300 million to 13 billion parameters.

This is the repository for the **NusaT2T v2 - Task-Specific Prompts**. Links to models and other datasets can be found below.

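The dataset ships as parquet with the schema declared in the metadata above (`dataset_name`, `subset_name`, `prompt_id`, `template_name`, `dataset_key`, `input`, `output`). As a minimal, unofficial sketch, the snippet below streams a few training examples with the 🤗 `datasets` library; the repository ID `indonlp/nusa_t2t_v2` is an assumption based on the NusaT2T v2 links in the table further down.

```python
from datasets import load_dataset

# Stream the train split to avoid downloading the full ~2.1 GB of parquet shards at once.
# The repository ID is assumed from the NusaT2T v2 links in this card.
dataset = load_dataset("indonlp/nusa_t2t_v2", split="train", streaming=True)

for example in dataset.take(3):
    # Fields follow the dataset_info metadata above.
    print(example["dataset_name"], "|", example["template_name"])
    print("input :", example["input"][:100])
    print("output:", example["output"][:100])
```

Dropping `streaming=True` materializes the full ~11 GB split locally instead.
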
## Model Details
*Note*: Use of Cendol is licensed under the [Apache 2.0 license](https://choosealicense.com/licenses/apache-2.0/)

**Overview**

IndoNLP developed and publicly released the Cendol family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 300 million to 13 billion parameters.

Cendol comes in two instruction-tuned versions:
1. Cendol-Instruct, which is instruction-tuned on task-specific NLP data such as sentiment analysis, topic modeling, machine translation, summarization, question answering, and paraphrasing.
2. Cendol-Chat, which is continuously instruction-tuned from **Cendol-Instruct** on general knowledge and human-centric prompts.

Both Cendol-Instruct and Cendol-Chat are designed for single-turn conversations. Cendol outperforms open-source multilingual and region-specific LLMs on most benchmarks we tested by a large margin, with the smaller version (<1B parameters) of Cendol being highly competitive with other 7B-parameter LLMs.

**Model Developers**: IndoNLP

**Variations**

Cendol comes in two base-model families (mT5 and LLaMA-2), each with a range of parameter sizes. mT5-based Cendol comes in 300M (mT5-small), 580M (mT5-base), 1.2B (mT5-large), 3.7B (mT5-XL), and 13B (mT5-XXL) variants, while LLaMA-2-based Cendol comes in 7B (LLaMA2-7B) and 13B (LLaMA2-13B) variants. Both families come with Cendol-Instruct and Cendol-Chat variations. All 13B-parameter models are tuned with LoRA, while the others are fully fine-tuned.

In our paper, we show that adapting region-specific LLMs using LoRA is ineffective and inefficient: the 13B (mT5-XXL) Cendol models perform slightly worse than the 1.2B (mT5-large) Cendol models, while having 3x slower training and 4x slower inference. As an alternative to LoRA, we showcase the benefits of vocabulary substitution as an effective and efficient strategy for region-specific adaptation, improving training and inference efficiency by **11.50%** and **18.71%**, respectively.
In terms of evaluation performance, we also show that the vocabulary-substituted model performs on par with the Cendol model trained with the original vocabulary. We release this Indonesian vocabulary-adapted model as `Indonesian-Vocab Instruct`.

**Input-Output**: Model input and output are text only.

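As an illustration of this text-in, text-out interface (not an official example from the paper), the sketch below runs a single-turn prompt through one of the fully fine-tuned mT5-based Instruct checkpoints listed in the table that follows; the model ID comes from that table, while the prompt wording and generation settings are assumptions.

```python
from transformers import pipeline

# mT5-based Cendol checkpoints are encoder-decoder models, so the standard
# text2text-generation pipeline applies; model ID taken from the table below.
generator = pipeline("text2text-generation", model="indonlp/cendol-mt5-small-inst")

# Illustrative task-specific, single-turn instruction (Indonesian-to-English translation).
prompt = "Terjemahkan ke dalam bahasa Inggris: Saya sedang belajar bahasa Indonesia."
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```
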
**Model Architecture**

|Model|Training Data|Params|Tuning Strategy|LR|
|---|---|---|---|---|
|[Cendol mT5-small Instruct](https://huggingface.co/indonlp/cendol-mt5-small-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|300M|Fully-Finetuned|3.0 x 10<sup>-4</sup>|
|[Cendol mT5-base Instruct](https://huggingface.co/indonlp/cendol-mt5-base-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|580M|Fully-Finetuned|3.0 x 10<sup>-4</sup>|
|[Cendol mT5-large Instruct](https://huggingface.co/indonlp/cendol-mt5-large-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|1.2B|Fully-Finetuned|3.0 x 10<sup>-4</sup>|
|[Cendol mT5-xl Instruct](https://huggingface.co/indonlp/cendol-mt5-xl-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|3.7B|Fully-Finetuned|3.0 x 10<sup>-4</sup>|
|[Cendol mT5-xxl Instruct](https://huggingface.co/indonlp/cendol-mt5-xxl-merged-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|13B|LoRA|2.0 x 10<sup>-4</sup>|
|[Cendol LLaMA-2 (7B) Instruct](https://huggingface.co/indonlp/cendol-llama2-7b-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|7B|Fully-Finetuned|2.0 x 10<sup>-5</sup>|
|[Cendol LLaMA-2 (7B) Indonesian-Vocab Instruct](https://huggingface.co/indonlp/cendol-llama2-ind-vocab-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|7B|Fully-Finetuned|2.0 x 10<sup>-5</sup>|
|[Cendol LLaMA-2 (13B) Instruct](https://huggingface.co/indonlp/cendol-llama2-13b-merged-inst)|[NusaT2T v1](https://huggingface.co/datasets/indonlp/nusa_t2t_v1)|13B|LoRA|2.0 x 10<sup>-5</sup>|
|[Cendol mT5-small Chat](https://huggingface.co/indonlp/cendol-mt5-small-chat)|[NusaT2T v2](https://huggingface.co/datasets/indonlp/nusa_t2t_v2)|300M|Fully-Finetuned|3.0 x 10<sup>-5</sup>|
|[Cendol mT5-base Chat](https://huggingface.co/indonlp/cendol-mt5-base-chat)|[NusaT2T v2](https://huggingface.co/datasets/indonlp/nusa_t2t_v2)|580M|Fully-Finetuned|3.0 x 10<sup>-5</sup>|
|[Cendol mT5-large Chat](https://huggingface.co/indonlp/cendol-mt5-large-chat)|[NusaT2T v2](https://huggingface.co/datasets/indonlp/nusa_t2t_v2)|1.2B|Fully-Finetuned|3.0 x 10<sup>-5</sup>|
|[Cendol mT5-xl Chat](https://huggingface.co/indonlp/cendol-mt5-xl-chat)|[NusaT2T v2](https://huggingface.co/datasets/indonlp/nusa_t2t_v2)|3.7B|Fully-Finetuned|3.0 x 10<sup>-5</sup>|
|[Cendol mT5-xxl Chat](https://huggingface.co/indonlp/cendol-mt5-xxl-merged-chat)|[NusaT2T v2](https://huggingface.co/datasets/indonlp/nusa_t2t_v2)|13B|LoRA|2.0 x 10<sup>-4</sup>|
|[Cendol LLaMA-2 (7B) Chat](https://huggingface.co/indonlp/cendol-llama2-7b-chat)|[NusaT2T v2](https://huggingface.co/datasets/indonlp/nusa_t2t_v2)|7B|Fully-Finetuned|1.0 x 10<sup>-5</sup>|
|[Cendol LLaMA-2 (13B) Chat](https://huggingface.co/indonlp/cendol-llama2-13b-merged-chat)|[NusaT2T v2](https://huggingface.co/datasets/indonlp/nusa_t2t_v2)|13B|LoRA|2.0 x 10<sup>-4</sup>|

**Model Dates** Cendol was trained between October 2023 and January 2024.

**License** Use of Cendol is licensed under the [Apache 2.0 license](https://choosealicense.com/licenses/apache-2.0/)

**Research Paper** ["Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages"](https://arxiv.org/abs/2404.06138)

## Intended Use
**Intended Use Cases** Cendol is intended for research use, especially for Indonesian languages. Cendol models are intended for single-turn instructions: Cendol-Instruct models can be used for task-specific instructions, while Cendol-Chat models can be used for general knowledge instructions.

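To make the distinction concrete, here are two illustrative single-turn prompts (assumptions for illustration only, not prescribed templates): a task-specific instruction of the kind Cendol-Instruct targets, and a general knowledge question of the kind Cendol-Chat targets.

```python
# Illustrative prompts only; neither is an official template from the Cendol paper.
task_specific_prompt = (  # suited to Cendol-Instruct models (e.g., sentiment analysis)
    "Klasifikasikan sentimen kalimat berikut sebagai positif atau negatif: "
    "Pelayanan restoran ini sangat memuaskan."
)
general_knowledge_prompt = "Apa ibu kota provinsi Jawa Barat?"  # suited to Cendol-Chat models
```
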
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English and Indonesian languages. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Cendol.

## Evaluation Results

In this section, we report results for the Cendol models on large-scale NLU and NLG benchmarks. For all the evaluations, we use our internal evaluation library.

#### NLU Performance
<img width="938" alt="NLU Performance" src="https://github.com/IndoNLP/indo-t0/assets/2826602/7656f005-f261-4982-ad06-f18dc57d5e3b">

#### NLG Performance
<img width="940" alt="NLG Performance" src="https://github.com/IndoNLP/indo-t0/assets/2826602/4942caea-35df-44e1-a95b-53a027c6115f">

#### Human Evaluation
<img width="456" alt="Human Evaluation" src="https://github.com/IndoNLP/indo-t0/assets/2826602/6128257f-d36c-4dbb-8f6c-4b936bc2ea66">

## Ethical Considerations and Limitations
Cendol is a new technology that carries risks with its use. Testing conducted to date has been in Indonesian, and it has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Cendol's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Cendol, developers should perform safety testing and tuning tailored to their specific applications of the model.

## Citation
If you use any of these resources, including Cendol models, code, or data, please cite the following articles:
```
@misc{cahyawijaya-etal-2024-cendol,
    title = {Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages},
    author = {Samuel Cahyawijaya and Holy Lovenia and Fajri Koto and Rifki Afina Putri and Emmanuel Dave and Jhonson Lee and Nuur Shadieq and Wawan Cenggoro and Salsabil Maulana Akbar and Muhammad Ihza Mahendra and Dea Annisayanti Putri and Bryan Wilie and Genta Indra Winata and Alham Fikri Aji and Ayu Purwarianti and Pascale Fung},
    year = {2024},
    eprint = {2404.06138},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}

@inproceedings{cahyawijaya-etal-2023-nusacrowd,
    title = "{N}usa{C}rowd: Open Source Initiative for {I}ndonesian {NLP} Resources",
    author = "Cahyawijaya, Samuel and
      Lovenia, Holy and
      Aji, Alham Fikri and
      Winata, Genta and
      Wilie, Bryan and
      Koto, Fajri and
      Mahendra, Rahmad and
      Wibisono, Christian and
      Romadhony, Ade and
      Vincentio, Karissa and
      Santoso, Jennifer and
      Moeljadi, David and
      Wirawan, Cahya and
      Hudi, Frederikus and
      Wicaksono, Muhammad Satrio and
      Parmonangan, Ivan and
      Alfina, Ika and
      Putra, Ilham Firdausi and
      Rahmadani, Samsul and
      Oenang, Yulianti and
      Septiandri, Ali and
      Jaya, James and
      Dhole, Kaustubh and
      Suryani, Arie and
      Putri, Rifki Afina and
      Su, Dan and
      Stevens, Keith and
      Nityasya, Made Nindyatama and
      Adilazuarda, Muhammad and
      Hadiwijaya, Ryan and
      Diandaru, Ryandito and
      Yu, Tiezheng and
      Ghifari, Vito and
      Dai, Wenliang and
      Xu, Yan and
      Damapuspita, Dyah and
      Wibowo, Haryo and
      Tho, Cuk and
      Karo Karo, Ichwanul and
      Fatyanosa, Tirana and
      Ji, Ziwei and
      Neubig, Graham and
      Baldwin, Timothy and
      Ruder, Sebastian and
      Fung, Pascale and
      Sujaini, Herry and
      Sakti, Sakriani and
      Purwarianti, Ayu",
    editor = "Rogers, Anna and
      Boyd-Graber, Jordan and
      Okazaki, Naoaki",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.868",
    doi = "10.18653/v1/2023.findings-acl.868",
    pages = "13745--13818"
}
```

Additionally, if you are inspired by our work on region-specific language models, especially for Indonesian and its local languages, please also consider citing the following articles:
```
@inproceedings{cahyawijaya-etal-2023-nusawrites,
    title = "{N}usa{W}rites: Constructing High-Quality Corpora for Underrepresented and Extremely Low-Resource Languages",
    author = "Cahyawijaya, Samuel and
      Lovenia, Holy and
      Koto, Fajri and
      Adhista, Dea and
      Dave, Emmanuel and
      Oktavianti, Sarah and
      Akbar, Salsabil and
      Lee, Jhonson and
      Shadieq, Nuur and
      Cenggoro, Tjeng Wawan and
      Linuwih, Hanung and
      Wilie, Bryan and
      Muridan, Galih and
      Winata, Genta and
      Moeljadi, David and
      Aji, Alham Fikri and
      Purwarianti, Ayu and
      Fung, Pascale",
    editor = "Park, Jong C. and
      Arase, Yuki and
      Hu, Baotian and
      Lu, Wei and
      Wijaya, Derry and
      Purwarianti, Ayu and
      Krisnadhi, Adila Alfa",
    booktitle = "Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = nov,
    year = "2023",
    address = "Nusa Dua, Bali",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.ijcnlp-main.60",
    doi = "10.18653/v1/2023.ijcnlp-main.60",
    pages = "921--945"
}

@inproceedings{winata-etal-2023-nusax,
    title = "{N}usa{X}: Multilingual Parallel Sentiment Dataset for 10 {I}ndonesian Local Languages",
    author = "Winata, Genta Indra and
      Aji, Alham Fikri and
      Cahyawijaya, Samuel and
      Mahendra, Rahmad and
      Koto, Fajri and
      Romadhony, Ade and
      Kurniawan, Kemal and
      Moeljadi, David and
      Prasojo, Radityo Eko and
      Fung, Pascale and
      Baldwin, Timothy and
      Lau, Jey Han and
      Sennrich, Rico and
      Ruder, Sebastian",
    editor = "Vlachos, Andreas and
      Augenstein, Isabelle",
    booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.eacl-main.57",
    doi = "10.18653/v1/2023.eacl-main.57",
    pages = "815--834"
}

@inproceedings{aji-etal-2022-one,
    title = "One Country, 700+ Languages: {NLP} Challenges for Underrepresented Languages and Dialects in {I}ndonesia",
    author = "Aji, Alham Fikri and
      Winata, Genta Indra and
      Koto, Fajri and
      Cahyawijaya, Samuel and
      Romadhony, Ade and
      Mahendra, Rahmad and
      Kurniawan, Kemal and
      Moeljadi, David and
      Prasojo, Radityo Eko and
      Baldwin, Timothy and
      Lau, Jey Han and
      Ruder, Sebastian",
    editor = "Muresan, Smaranda and
      Nakov, Preslav and
      Villavicencio, Aline",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.500",
    doi = "10.18653/v1/2022.acl-long.500",
    pages = "7226--7249"
}

@inproceedings{cahyawijaya-etal-2021-indonlg,
    title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
    author = "Cahyawijaya, Samuel and
      Winata, Genta Indra and
      Wilie, Bryan and
      Vincentio, Karissa and
      Li, Xiaohong and
      Kuncoro, Adhiguna and
      Ruder, Sebastian and
      Lim, Zhi Yuan and
      Bahar, Syafri and
      Khodra, Masayu and
      Purwarianti, Ayu and
      Fung, Pascale",
    editor = "Moens, Marie-Francine and
      Huang, Xuanjing and
      Specia, Lucia and
      Yih, Scott Wen-tau",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.699",
    doi = "10.18653/v1/2021.emnlp-main.699",
    pages = "8875--8898"
}

@inproceedings{wilie-etal-2020-indonlu,
    title = "{I}ndo{NLU}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Understanding",
    author = "Wilie, Bryan and
      Vincentio, Karissa and
      Winata, Genta Indra and
      Cahyawijaya, Samuel and
      Li, Xiaohong and
      Lim, Zhi Yuan and
      Soleman, Sidik and
      Mahendra, Rahmad and
      Fung, Pascale and
      Bahar, Syafri and
      Purwarianti, Ayu",
    editor = "Wong, Kam-Fai and
      Knight, Kevin and
      Wu, Hua",
    booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
    month = dec,
    year = "2020",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.aacl-main.85",
    pages = "843--857"
}
```