---
language:
- tl
tags:
- Tagalog
- Taglish
- gpt2
---
## Model Description
As part of the iTANONG project's 10-billion-token Tagalog dataset, we introduce our initial pre-trained language models for Philippine languages. Our model suite comprises BERT-based models, GPT-based models, and Sentence Transformers tailored for Tagalog, Taglish, and Cebuano.
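Since this is a GPT-2-based causal language model, it can be loaded with the Hugging Face `transformers` library. The snippet below is a minimal sketch: the model ID `dost-asti/gpt2-tagalog` is a placeholder, so substitute this repository's actual identifier.
```python
# Minimal generation sketch for a GPT-2-based Tagalog model.
# NOTE: "dost-asti/gpt2-tagalog" is a hypothetical model ID; replace it
# with this repository's actual Hub identifier.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "dost-asti/gpt2-tagalog"  # placeholder ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a Tagalog prompt and sample a short continuation.
inputs = tokenizer("Magandang araw sa inyong lahat", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```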
## Training Details
This model was trained on an NVIDIA V100-32GB GPU on the DOST-ASTI Computing and Archiving Research Environment (COARE): https://asti.dost.gov.ph/projects/coare/
### Training Data
The training dataset was compiled from both formal and informal sources, consisting of 5,159,917 instances from formal channels and 3,057,180 from informal sources. More information on pre-processing and training parameters can be found in our paper.
## Citation
Paper: iTANONG-DS: A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages
BibTeX:
```
@inproceedings{visperas-etal-2023-itanong,
title = "i{TANONG}-{DS} : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select {P}hilippine Languages",
author = "Visperas, Moses L. and
Borjal, Christalline Joie and
Adoptante, Aunhel John M and
Abacial, Danielle Shine R. and
Decano, Ma. Miciella and
Peramo, Elmer C",
editor = "Abbas, Mourad and
Freihat, Abed Alhakim",
booktitle = "Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)",
month = dec,
year = "2023",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.icnlsp-1.34",
pages = "316--323",
}
```