Alienmaster committed
Commit 960bf13
Parent: 9205d15

First commit

Files changed (4)
  1. README.md +49 -0
  2. cleaner.py +15 -0
  3. test.parquet +3 -0
  4. train.parquet +3 -0
README.md ADDED
@@ -0,0 +1,49 @@
+ ---
+ language:
+ - de
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: "train.parquet"
+   - split: test
+     path: "test.parquet"
+ ---
+
+ ## Information
+ This dataset contains 1785 manually annotated tweets from German politicians, posted during the 2021 election year (01.01.2021 - 31.12.2021).
+ The tweets were annotated by six academics who were split into two groups, so each group of three people annotated the sentiment of ~900 tweets. For every tweet, the majority label was taken. The annotators reached a moderate Kappa agreement.
+
+ ## Preprocessing
+ The source for this version of the dataset is located [here](https://github.com/NilsHellwig/Twitter_German_Federal_Election_Perception_2021/tree/main/Datasets/Schmidt2022).
+ For easier processing, line breaks were removed from the texts.
+ The counts for replies, retweets and favorites, as well as the phrase "Diesen Thread anzeigen" ("Show this thread"), were removed from the text. Neither is part of the tweet itself; they were most likely added by the crawling tool.
+ The preprocessing steps can be reproduced with the `cleaner.py` script.
+
+ ## Annotation
+ The tweets were annotated as follows:
+ - 1 if the sentiment of the tweet is positive
+ - 2 if the sentiment of the tweet is negative
+ - 3 if the sentiment of the tweet is neutral
+
+ ## Citation
+ @inproceedings{schmidt-etal-2022-sentiment,
+     title = "Sentiment Analysis on {T}witter for the Major {G}erman Parties during the 2021 {G}erman Federal Election",
+     author = "Schmidt, Thomas and
+       Fehle, Jakob and
+       Weissenbacher, Maximilian and
+       Richter, Jonathan and
+       Gottschalk, Philipp and
+       Wolff, Christian",
+     editor = "Schaefer, Robin and
+       Bai, Xiaoyu and
+       Stede, Manfred and
+       Zesch, Torsten",
+     booktitle = "Proceedings of the 18th Conference on Natural Language Processing (KONVENS 2022)",
+     month = "12--15 " # sep,
+     year = "2022",
+     address = "Potsdam, Germany",
+     publisher = "KONVENS 2022 Organizers",
+     url = "https://aclanthology.org/2022.konvens-1.9",
+     pages = "74--87",
+ }
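
The `configs` block in the card's front matter declares a default configuration with the two parquet splits, and the annotation scheme above maps 1/2/3 to positive/negative/neutral. A minimal loading sketch, assuming the parquet files sit in the working directory and that the sentiment column is named `label` (the card does not state the column name; the cleaned text column `text` is produced by `cleaner.py`):

```python
from datasets import load_dataset

# Load the two splits exactly as declared under `configs` -> `data_files`.
ds = load_dataset(
    "parquet",
    data_files={"train": "train.parquet", "test": "test.parquet"},
)

# Annotation scheme from the card; the column name "label" is an assumption.
label_names = {1: "positive", 2: "negative", 3: "neutral"}

example = ds["test"][0]
print(example["text"])                # cleaned tweet text (column created by cleaner.py)
print(label_names[example["label"]])  # assumed name of the sentiment column
```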
cleaner.py ADDED
@@ -0,0 +1,15 @@
+ from datasets import load_dataset
+ import re
+
+ # Load the crawled tweets (CSV with an "Embedded_text" column);
+ # presumably repeated with train.csv -> train.parquet.
+ ds = load_dataset("csv", data_files="test.csv", split="train")
+
+ def prepro(datapoint):
+     # Trailing reply/retweet/favorite counts added by the crawling tool.
+     pattern = r"\s([0-9]\.){0,1}[0-9]{0,3}(?=\Z)"
+     text = datapoint["Embedded_text"].replace("\n", " ")
+     for _ in range(3):  # strip the three trailing counts one at a time
+         text = re.sub(pattern, "", text)
+     # Drop the viewer phrase and collapse double spaces left by the removals.
+     text = text.replace("Diesen Thread anzeigen", "").replace("  ", " ").replace("  ", " ")
+     datapoint["text"] = text
+     del datapoint["Embedded_text"]
+     return datapoint
+
+ ds_f = ds.map(prepro)
+ ds_f.to_parquet("test.parquet")
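
For illustration, a standalone sketch of what `prepro` does to a single raw text. The input string is invented (the raw CSV files are not part of this commit); the cleaning steps are copied from the script above:

```python
import re

# Invented example of a crawled tweet: the tweet body, the viewer phrase
# "Diesen Thread anzeigen", and three trailing engagement counts.
raw = "Heute im Bundestag.\nDiesen Thread anzeigen 3 12 1.204"

pattern = r"\s([0-9]\.){0,1}[0-9]{0,3}(?=\Z)"
text = raw.replace("\n", " ")
for _ in range(3):  # strip the three trailing counts one by one
    text = re.sub(pattern, "", text)
text = text.replace("Diesen Thread anzeigen", "").replace("  ", " ").replace("  ", " ")

print(text)  # the tweet body with the counts and the phrase removed
```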
test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23a020bcbe3c949bbebbafff63c893d8c7e6ca3d2a1a8a9938018738d96a7e22
+ size 81332
train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c233718d6d36a69dbd992df0f0e4c30b67fd35e6b4404970dce9be1c296ec77
+ size 326319