---
language:
- tl
tags:
- Tagalog
- Taglish
---

## Model Description

As part of the iTANONG project's 10-billion-token Tagalog dataset, we introduce our initial pre-trained language models for Philippine languages. Our model suite encompasses BERT-based, GPT-based, and Sentence Transformer models tailored for Tagalog, Taglish, and Cebuano.

## Training Details

This model was trained on an NVIDIA V100 32 GB GPU on the DOST-ASTI Computing and Archiving Research Environment (COARE): https://asti.dost.gov.ph/projects/coare/

### Training Data

The training dataset was compiled from both formal and informal sources, consisting of 5,159,917 instances from formal channels and 3,057,180 from informal sources. More information on pre-processing and training parameters can be found in our paper.

## Citation

Paper: iTANONG-DS: A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages

BibTeX:
```
@inproceedings{2023itanongds,
  title={{iTANONG-DS: A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages}},
  author={Visperas, M. and Borjal, C. J. and Adoptante, A. J. and Peramo, E. and Abacial, D. S. and Decano, M. M.},
  booktitle={2023 International Conference on Natural Language and Speech Processing},
  year={2023},
  address={Trento, Italy},
}
```
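
As a minimal usage sketch for the BERT-based checkpoint: the repository id `dost-asti/itanong-bert-tagalog` below is a hypothetical placeholder (substitute the actual model id from the Hub), and the example assumes a masked-language-modeling head.

```python
# Minimal sketch: load a BERT-based Tagalog checkpoint with Hugging Face transformers.
# NOTE: "dost-asti/itanong-bert-tagalog" is a placeholder id, not the published repo name.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "dost-asti/itanong-bert-tagalog"  # replace with the actual Hub repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Fill-mask example on a Tagalog sentence containing the tokenizer's mask token.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"Magandang {tokenizer.mask_token} sa inyong lahat!"))
```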