Update README.md
README.md (CHANGED)
@@ -64,18 +64,20 @@ Do consider the biases which come from both the pre-trained RoBERTa model and th
 
 Sundanese RoBERTa Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
 
-##
-
-```
-@
-
-
-
-
-
-
-
-
-
-}
-```
+## Citation Information
+
+```bib
+@article{rs-907893,
+  author = {Wongso, Wilson
+  and Lucky, Henry
+  and Suhartono, Derwin},
+  journal = {Journal of Big Data},
+  year = {2022},
+  month = {Feb},
+  day = {26},
+  abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
+  issn = {2693-5015},
+  doi = {10.21203/rs.3.rs-907893/v1},
+  url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
+}
+```
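
Since the README documents a text-classification model, a minimal usage sketch with the Hugging Face Transformers `pipeline` API may be helpful context. The hub model ID and the sample sentence below are assumptions, not confirmed by this diff; check the model card for the actual path and label set.

```python
# Minimal usage sketch for the emotion classifier described in this README.
# NOTE: the hub model ID below is an assumption; consult the model card
# for the published path and the emotion labels it predicts.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="w11wo/sundanese-roberta-base-emotion-classifier",  # assumed hub ID
)

# Returns a list of {"label": ..., "score": ...} dicts for the input text.
print(classifier("Abdi bagja pisan dinten ieu!"))  # hypothetical Sundanese input
```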