horychtom committed
Commit 6a2bda3
1 Parent(s): 7ad87e0

Create README.md

Files changed (1): README.md (+49 -0)
README.md ADDED
---
license: cc-by-nc-4.0
language:
- multilingual
base_model:
- FacebookAI/xlm-roberta-base
pipeline_tag: text-classification
---

This model was pre-trained on LBM (Large Bias Mixture), a collection of 59 bias-related tasks, via multi-task learning.
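A minimal usage sketch (not part of the original card), assuming the checkpoint loads through the standard `transformers` text-classification classes; `REPO_ID` is a placeholder for this model's Hugging Face repository id, and the meaning of the predicted label depends on the classification head shipped with the checkpoint:

```python
# Hedged sketch, assuming a standard transformers sequence-classification checkpoint.
# REPO_ID is a placeholder, not a confirmed identifier: replace it with this
# model's actual Hugging Face repository id.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

REPO_ID = "<this-model-repo-id>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForSequenceClassification.from_pretrained(REPO_ID)
model.eval()

sentence = "The senator's reckless plan would devastate the economy."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# How the label ids map to classes depends on the checkpoint's config;
# inspect model.config.id2label before relying on the output.
pred_id = int(logits.argmax(dim=-1))
print(pred_id, model.config.id2label.get(pred_id, "unknown"))
```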
---

## Citation

**Code repository**: https://github.com/Media-Bias-Group/magpie-multi-task

**Paper**: https://aclanthology.org/2024.lrec-main.952/

If you use this model, please cite the following paper(s):

```bibtex
@inproceedings{horych-etal-2024-magpie,
    title = "{MAGPIE}: Multi-Task Analysis of Media-Bias Generalization with Pre-Trained Identification of Expressions",
    author = "Horych, Tom{\'a}{\v{s}} and
      Wessel, Martin Paul and
      Wahle, Jan Philip and
      Ruas, Terry and
      Wa{\ss}muth, Jerome and
      Greiner-Petter, Andr{\'e} and
      Aizawa, Akiko and
      Gipp, Bela and
      Spinde, Timo",
    editor = "Calzolari, Nicoletta and
      Kan, Min-Yen and
      Hoste, Veronique and
      Lenci, Alessandro and
      Sakti, Sakriani and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.952",
    pages = "10903--10920",
    abstract = "Media bias detection poses a complex, multifaceted problem traditionally tackled using single-task models and small in-domain datasets, consequently lacking generalizability. To address this, we introduce MAGPIE, a large-scale multi-task pre-training approach explicitly tailored for media bias detection. To enable large-scale pre-training, we construct Large Bias Mixture (LBM), a compilation of 59 bias-related tasks. MAGPIE outperforms previous approaches in media bias detection on the Bias Annotation By Experts (BABE) dataset, with a relative improvement of 3.3{\%} F1-score. Furthermore, using a RoBERTa encoder, we show that MAGPIE needs only 15{\%} of fine-tuning steps compared to single-task approaches. We provide insight into task learning interference and show that sentiment analysis and emotion detection help learning of all other tasks, and scaling the number of tasks leads to the best results. MAGPIE confirms that MTL is a promising approach for addressing media bias detection, enhancing the accuracy and efficiency of existing models. Furthermore, LBM is the first available resource collection focused on media bias MTL.",
}
```