dainis-boumber committed on
Commit
8689310
1 Parent(s): c8bc52c

Delete political_statements/README.md

Files changed (1)
  1. political_statements/README.md +0 -42
political_statements/README.md DELETED
@@ -1,42 +0,0 @@
- # LIAR to Political Statements 2.0
-
- This notebook contains the code for alternative conversion(s) of the LIAR dataset to the Political Statements dataset.
-
- ## Labeling
-
- The primary difference is the change in the re-labeling scheme when converting the task from multiclass to binary.
-
- ### Old scheme
-
- We use the claim field as the text and map the labels “pants-fire,” “false,”
- and “barely-true” to deceptive, and “half-true,” “mostly-true,” and “true”
- to non-deceptive, resulting in 5,669 deceptive and 7,167 truthful
- statements.
-
- ### New scheme
-
- Following
-
- *Upadhayay, B., Behzadan, V.: "Sentimental liar: Extended corpus and deep learning models for fake claim classification" (2020)*
-
- and
-
- *Shahriar, Sadat, Arjun Mukherjee, and Omprakash Gnawali. "Deception Detection with Feature-Augmentation by Soft Domain Transfer." International Conference on Social Informatics. Cham: Springer International Publishing, 2022.*
-
- we map the labels “pants-fire,” “false,”
- “barely-true,” **and “half-true”** to deceptive; the labels "mostly-true" and "true" are mapped to non-deceptive. Statements that are only half-true are now considered deceptive, making the criterion for a statement being non-deceptive stricter: now 2 out of 6 labels map to non-deceptive and 4 map to deceptive.
-
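The new relabeling scheme can be sketched as follows. This is an illustrative sketch, not the notebook's actual code; the dict and function names are assumptions.

```python
# Sketch of the new binary relabeling scheme (illustrative, not the notebook's code).
# LIAR's six labels collapse to binary: only "mostly-true" and "true" count as
# non-deceptive; the other four, including "half-true", map to deceptive.
NEW_SCHEME = {
    "pants-fire": 1,
    "false": 1,
    "barely-true": 1,
    "half-true": 1,   # stricter than the old scheme, which mapped this to 0
    "mostly-true": 0,
    "true": 0,
}

def to_binary(liar_label: str) -> int:
    """Map a LIAR label to is_deceptive (1 = deceptive, 0 = non-deceptive)."""
    return NEW_SCHEME[liar_label]
```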
- ## Cleaning
-
- The dataset was cleaned using cleanlab, with visual inspection of the problems it flagged. Partial sentences, such as "On Iran nuclear deal" or "On inflation", were removed. Texts with a large number of parser-induced errors were removed, as were statements in languages other than English (namely, Spanish). Sequences with Unicode errors, and sequences containing fewer than one character or over 1 million characters, were also removed.
-
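The rule-based part of the cleaning step can be sketched as below; the cleanlab-based label-issue detection and visual inspection are not shown, and the function name and exact checks are assumptions based on the description above.

```python
def passes_filters(text: str) -> bool:
    """Rule-based cleaning sketch (assumption; cleanlab + visual inspection not shown).
    Rejects empty texts, texts over 1 million characters, and Unicode errors."""
    if len(text) < 1:                 # fewer than one character
        return False
    if len(text) > 1_000_000:         # over 1 million characters
        return False
    try:
        text.encode("utf-8")          # crude check for Unicode errors (lone surrogates)
    except UnicodeError:
        return False
    return True
```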
- ## Preprocessing
-
- Whitespace, quotes, bullet points, and Unicode are normalized.
-
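A minimal sketch of such a normalization pass is given below; the exact rules used in the notebook are not documented, so the specific substitutions here are assumptions.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Sketch of the preprocessing step (the exact rules are assumptions).
    Normalizes Unicode, curly quotes, bullet points, and whitespace."""
    text = unicodedata.normalize("NFKC", text)                  # Unicode normalization
    text = text.replace("\u201c", '"').replace("\u201d", '"')   # curly double quotes
    text = text.replace("\u2018", "'").replace("\u2019", "'")   # curly single quotes
    text = re.sub(r"[\u2022\u2023\u25aa]\s*", "", text)         # strip bullet points
    text = re.sub(r"\s+", " ", text).strip()                    # collapse whitespace
    return text
```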
- ## Data
-
- Each sample consists of "text" (string) and "is_deceptive" (1 or 0); 1 means the text is deceptive, 0 indicates otherwise.
-
- There are 12,497 samples in the dataset, contained in `political_statements.jsonl`. For reproducibility, the data is also split into training, validation, and test sets in an 80/10/10 ratio, named `train.jsonl`, `valid.jsonl`, and `test.jsonl`. The split was stratified. The training set contains 9,997 samples; the validation and test sets contain 1,250 samples each.
-
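The data format and the stratified 80/10/10 split can be sketched as follows. This is a stdlib-only sketch: the released splits were produced by the dataset authors, and the seed and splitting procedure here are assumptions, so it will not reproduce their exact files.

```python
import json
import random

def load_jsonl(path):
    """Load one JSON record per line; each has "text" and "is_deceptive"."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def stratified_split(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Sketch of an 80/10/10 stratified split (seed and method are assumptions).
    Shuffles within each label group, then slices each group proportionally."""
    rng = random.Random(seed)
    train, valid, test = [], [], []
    for label in {s["is_deceptive"] for s in samples}:
        group = [s for s in samples if s["is_deceptive"] == label]
        rng.shuffle(group)
        n_train = int(len(group) * ratios[0])
        n_valid = int(len(group) * ratios[1])
        train += group[:n_train]
        valid += group[n_train:n_train + n_valid]
        test += group[n_train + n_valid:]
    return train, valid, test
```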