---
dataset_info:
- config_name: bias_prediction
  features:
  - name: file
    dtype: string
  - name: id_sente
    dtype: string
  - name: id_article
    dtype: string
  - name: domain
    dtype: string
  - name: year
    dtype: string
  - name: sentences
    dtype: string
  - name: label
    dtype: int64
  - name: label_text
    dtype: string
  splits:
  - name: train
    num_bytes: 163041
    num_examples: 738
  - name: full_train
    num_bytes: 951010
    num_examples: 4403
  - name: test
    num_bytes: 384327
    num_examples: 1788
  download_size: 718605
  dataset_size: 1498378
- config_name: factuality_prediction
  features:
  - name: file
    dtype: string
  - name: id_sente
    dtype: string
  - name: id_article
    dtype: string
  - name: domain
    dtype: string
  - name: year
    dtype: string
  - name: sentences
    dtype: string
  - name: label
    dtype: int64
  - name: label_text
    dtype: string
  splits:
  - name: train
    num_bytes: 606722
    num_examples: 2826
  - name: full_train
    num_bytes: 944929
    num_examples: 4403
  - name: test
    num_bytes: 381863
    num_examples: 1788
  download_size: 927856
  dataset_size: 1933514
- config_name: original
  features:
  - name: file
    dtype: string
  - name: id_sente
    dtype: string
  - name: id_article
    dtype: string
  - name: domain
    dtype: string
  - name: year
    dtype: string
  - name: sentences
    dtype: string
  - name: classe
    dtype: int64
  - name: label_text
    dtype: string
  splits:
  - name: train
    num_bytes: 1317047
    num_examples: 6191
  download_size: 516651
  dataset_size: 1317047
configs:
- config_name: bias_prediction
  data_files:
  - split: train
    path: bias_prediction/train-*
  - split: full_train
    path: bias_prediction/full_train-*
  - split: test
    path: bias_prediction/test-*
- config_name: factuality_prediction
  data_files:
  - split: train
    path: factuality_prediction/train-*
  - split: full_train
    path: factuality_prediction/full_train-*
  - split: test
    path: factuality_prediction/test-*
- config_name: original
  data_files:
  - split: train
    path: original/train-*
license: unknown
task_categories:
- text-classification
language:
- pt
- por
pretty_name: FactNews
size_categories:
  - 1K<n<10K
multilinguality:
  - monolingual
language_creators:
  - found
annotations_creators:
  - expert-generated
tags:
  - subjectivity
  - mediabias
  - media-bias
---

## Disclaimer

*I am not the author of this dataset. This is a modified version of the FactNews dataset on Hugging Face; the original data is made available by Vargas et al., 2023 and can be downloaded from: https://github.com/franciellevargas/FactNews*

*Modifications:*
- *The "original" subset contains the unmodified original CSV*
- *The "bias_prediction" and "factuality_prediction" subsets were split into train (70%) and test (30%) by randomly assigning
  sentences grouped by their id_article. This configuration differs from the authors' setup, which used a 90%/10% 10-fold split in the paper.*
- *Each task contains an unbalanced split (full_train) and a balanced split (train); see the loading sketch below.*
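
A minimal loading sketch with the `datasets` library, assuming the config and split names listed above; `user/FactNews` is a placeholder repository id, not the actual Hub path:

```python
from datasets import load_dataset

# Placeholder id -- substitute the actual Hub path of this dataset.
REPO_ID = "user/FactNews"

# Each task is a separate config; "train" is the balanced split,
# "full_train" the unbalanced one, and "test" the held-out 30%.
bias = load_dataset(REPO_ID, "bias_prediction")
fact = load_dataset(REPO_ID, "factuality_prediction")

print(bias["train"].num_rows, bias["full_train"].num_rows, bias["test"].num_rows)
print(fact["train"][0]["sentences"], "->", fact["train"][0]["label_text"])
```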

# Sentence-Level Annotated Dataset for Predicting Factuality of News and Bias of Media Outlets in Portuguese

Automated fact-checking and news credibility verification at scale require accurate prediction of news factuality and media bias.
Here, we introduce a large sentence-level dataset, titled FactNews, composed of 6,191 sentences expertly annotated according to factuality
and media bias definitions proposed by AllSides. We used FactNews to assess the overall reliability of news sources by formulating two
text classification problems: predicting the sentence-level factuality of news reporting and the bias of media outlets. Our experiments show
that biased sentences contain more words than factual sentences and exhibit a predominance of emotional language. Hence,
the fine-grained analysis of the subjectivity and impartiality of news articles shows promising results for predicting the reliability of an
entire media outlet. Finally, given the severity of fake news and political polarization in Brazil, and the lack of research for Portuguese,
both the dataset and baselines are provided for Brazilian Portuguese. The following table details the FactNews labels, documents, and stories:

| Factual | Quotes | Biased | Total sentences | Total news stories | Total news documents |
| :---    | :---:  |   ---: |            ---: |               ---: |                 ---: |
| 4,242   | 1,391  | 558    | 6,191           | 100                | 300                  |
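
For illustration, the sentence-length observation above can be checked directly against the "original" config. A minimal sketch (reusing the placeholder `REPO_ID` from the loading example) that computes the mean token count per class:

```python
from collections import defaultdict

from datasets import load_dataset

REPO_ID = "user/FactNews"  # placeholder id, as above

orig = load_dataset(REPO_ID, "original", split="train")

# Mean whitespace-token count per class, to sanity-check the
# "biased sentences are longer" observation from the paper.
lengths = defaultdict(list)
for row in orig:
    lengths[row["label_text"]].append(len(row["sentences"].split()))
for label, vals in sorted(lengths.items()):
    print(f"{label}: {sum(vals) / len(vals):.1f} tokens on average")
```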

### Sources:
  - Media 1: Folha de São Paulo
  - Media 2: Estadão
  - Media 3: O Globo 

### Paper Results:

Sentence-Level Media Bias Prediction (90%/10% 10-fold split)
- 67% F1-score with fine-tuned mBERT (cased)

Sentence-Level Factuality Prediction (90%/10% 10-fold split)
- 88% F1-score with fine-tuned mBERT (cased)
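
For reference, a minimal fine-tuning sketch along the lines of the mBERT baseline, using Hugging Face `transformers`; the hyperparameters and `REPO_ID` are illustrative assumptions, not the authors' exact setup:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

REPO_ID = "user/FactNews"  # placeholder id, as in the loading example

ds = load_dataset(REPO_ID, "factuality_prediction")
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch directly.
    return tok(batch["sentences"], truncation=True,
               padding="max_length", max_length=128)

ds = ds.map(tokenize, batched=True)
num_labels = len(set(ds["train"]["label"]))

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=num_labels
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="factnews-mbert",
        num_train_epochs=3,              # illustrative hyperparameters,
        per_device_train_batch_size=16,  # not the paper's exact setup
    ),
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
)
trainer.train()
```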


## Citation

```
Vargas, F., Jaidka, K., Pardo, T.A.S., Benevenuto, F. (2023). Predicting Sentence-Level Factuality of News and Bias of Media Outlets. Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pp. 1197--1206. Varna, Bulgaria. Association for Computational Linguistics (ACL).
```

**Bibtex**
```
@inproceedings{vargas-etal-2023-predicting,
    title = "Predicting Sentence-Level Factuality of News and Bias of Media Outlets",
    author = "Vargas, Francielle  and
      Jaidka, Kokil  and
      Pardo, Thiago  and
      Benevenuto, Fabr{\'\i}cio",
    editor = "Mitkov, Ruslan  and
      Angelova, Galia",
    booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
    month = sep,
    year = "2023",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd., Shoumen, Bulgaria",
    url = "https://aclanthology.org/2023.ranlp-1.127",
    pages = "1197--1206",
}
```
## Dataset Description

- **Homepage:** https://github.com/franciellevargas/FactNews
- **Paper:** [Predicting Sentence-Level Factuality of News and Bias of Media Outlets](https://aclanthology.org/2023.ranlp-1.127)