Update README.md
README.md
CHANGED
@@ -66,5 +66,27 @@ BanglaClickBERT is a BERT-based model with 12 layers. It utilizes the foundation
 If you use this model, please cite the following paper:
 
 ```
-
+@inproceedings{joy-etal-2023-banglaclickbert,
+    title = "{B}angla{C}lick{BERT}: {B}angla Clickbait Detection from News Headlines using Domain Adaptive {B}angla{BERT} and {MLP} Techniques",
+    author = "Joy, Saman Sarker  and
+      Aishi, Tanusree Das  and
+      Nodi, Naima Tahsin  and
+      Rasel, Annajiat Alim",
+    editor = "Muresan, Smaranda  and
+      Chen, Vivian  and
+      Kennington, Casey  and
+      Vandyke, David  and
+      Dethlefs, Nina  and
+      Inoue, Koji  and
+      Ekstedt, Erik  and
+      Ultes, Stefan",
+    booktitle = "Proceedings of the 21st Annual Workshop of the Australasian Language Technology Association",
+    month = nov,
+    year = "2023",
+    address = "Melbourne, Australia",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2023.alta-1.1",
+    pages = "1--10",
+    abstract = "News headlines or titles that deliberately persuade readers to view particular online content are referred to as clickbait. There have been numerous studies focused on clickbait detection in English; in comparison, very little research has addressed clickbait detection in Bangla news headlines. In this study, we experimented with several distinctive transformer models, namely BanglaBERT and XLM-RoBERTa. Additionally, we introduced a domain-adaptive pretrained model, BanglaClickBERT. We conducted a series of experiments to identify the most effective model. The dataset used for this study contained 15,056 labeled and 65,406 unlabeled news headlines; in addition, we collected more unlabeled Bangla news headlines by scraping clickbait-dense websites, for a total of 1 million unlabeled news headlines used to build BanglaClickBERT. Our approach surpasses the performance of existing state-of-the-art techniques, providing a more accurate and efficient solution for detecting clickbait in Bangla news headlines, with potential implications for improving online content quality and user experience.",
+}
 ```