Update README.md
README.md
|
## Model Description

This model is a fine-tuned version of BgGPT-7B-Instruct-v0.2 for a propaganda detection task. It is effectively a multilabel classifier, determining whether a given propaganda text in Bulgarian contains any of five predefined propaganda types.

This model was created by [`Identrics`](https://identrics.ai/), in the scope of the WASPer project.
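Since the card does not show inference code, here is a minimal sketch of the decoding step a multilabel classifier like this one typically uses: each of the five labels is scored independently (sigmoid per label, not a softmax over labels). The label names and score values below are placeholders, not taken from this card.

```python
import math

# Hypothetical label set -- the card defines five propaganda types but does
# not name them here, so these are placeholder identifiers.
LABELS = ["type_1", "type_2", "type_3", "type_4", "type_5"]

def sigmoid(x: float) -> float:
    """Logistic function mapping a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def decode_multilabel(logits, threshold=0.5):
    """Threshold each label's score independently (multilabel decoding).

    Unlike single-label classification, several labels -- or none -- may
    be active for the same input text.
    """
    return [label for label, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

# Example: raw per-label scores from a classifier head (dummy values)
print(decode_multilabel([2.1, -1.3, 0.4, -0.2, 3.0]))
# -> ['type_1', 'type_3', 'type_5']
```

The key design point is the independent per-label threshold: a text can carry several propaganda types at once, so a softmax (which forces exactly one label) would be the wrong decoder here.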

## Propaganda taxonomy

The model was then tested on a smaller evaluation dataset, achieving an F1 score of

## Citation

If you find our work useful, please consider citing WASPer:

```
@article{bai2024longwriter,
  title={LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs},
  author={Yushi Bai and Jiajie Zhang and Xin Lv and Linzhi Zheng and Siqi Zhu and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
  journal={arXiv preprint arXiv:2408.07055},
  year={2024}
}
```