---
license: cc-by-4.0
tags:
- text
- classification
- politics
- extremism
- ml
- advocacy
size_categories:
- 1K<n<10K
---
**We would like to thank Assistant Professor Leilani H. Gilpin (UC Santa Cruz) and the AIEA Lab for their guidance and support in the development of this dataset.** —*Aditya Suresh, Anthony Lu, Vishnu Iyer*

**About this data:** Social media has seen a rise in both the quantity and intensity of extremist content across services. With white supremacist movements active around the world, terrorist organizations recruiting through affiliated accounts, and a general climate of hate emerging in the modern era of polarization, it is increasingly vital to recognize these patterns and adequately combat the harms of digital extremism on a global scale.

**Citations:** Our dataset builds on a preexisting dataset found on Kaggle: Version 1 of "Hate Speech Detection curated Dataset🤬" by Alban Nyantudre (2023), available at [https://www.kaggle.com/datasets/waalbannyantudre/hate-speech-detection-curated-dataset/data](https://www.kaggle.com/datasets/waalbannyantudre/hate-speech-detection-curated-dataset/data) and accessed in 2025. With over 400,000 real, cleaned posts, this resource was essential to our work: we could not have sourced and labelled our data points without it.

**Classification:** Our team hand-labelled nearly 4,000 posts from our sourced database, assigning each of them one of two blanket tags: "EXTREMIST" or "NON_EXTREMIST." Because harmful rhetoric online often relies on context, we followed a general rule of labelling a post as extremist so long as it "provoked harm to a person or a group of people, whether through advocacy for violence, discrimination, or other hurtful sentiments, based on a characteristic of the group."

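As a minimal sketch of the binary tag scheme above, the stand-alone Python below splits records by label and counts the two classes. The column names (`text`, `label`) and the sample rows are illustrative assumptions, not the dataset's documented schema:

```python
# Sketch of working with the EXTREMIST / NON_EXTREMIST tag scheme.
# Column names ("text", "label") and the rows are made-up assumptions.
import csv
import io
from collections import Counter

sample_csv = """text,label
"Example benign post",NON_EXTREMIST
"Example post advocating harm against a group",EXTREMIST
"Another benign post",NON_EXTREMIST
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Split records by the blanket tag, as in the labelling rule above.
extremist = [r["text"] for r in rows if r["label"] == "EXTREMIST"]
non_extremist = [r["text"] for r in rows if r["label"] == "NON_EXTREMIST"]

label_counts = Counter(r["label"] for r in rows)
print(label_counts)  # distribution across the two tags
```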
**Value of the data:** This dataset can be used to build extremist-sentiment analysis systems and train machine learning models, since it reflects current online language use as captured by the source material for the data points. It can also serve as a benchmark for comparison against other extremism datasets and other extremist-sentiment analysis systems.

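The benchmark use case above can be sketched as scoring any candidate classifier against the gold tags. The toy keyword baseline, trigger words, and example posts below are all hypothetical stand-ins, not part of this dataset:

```python
# Hypothetical benchmark sketch: score a trivial keyword baseline against
# gold EXTREMIST / NON_EXTREMIST labels. Texts and triggers are made up.
def keyword_baseline(text: str, triggers=("attack", "destroy", "eradicate")) -> str:
    """Toy heuristic: tag a post EXTREMIST if it contains any trigger word."""
    lowered = text.lower()
    return "EXTREMIST" if any(t in lowered for t in triggers) else "NON_EXTREMIST"

gold = [
    ("we should eradicate that group", "EXTREMIST"),
    ("lovely weather at the rally today", "NON_EXTREMIST"),
    ("they deserve to be attacked", "EXTREMIST"),
    ("I disagree with this policy", "NON_EXTREMIST"),
    ("subtle dog-whistle with no trigger words", "EXTREMIST"),  # baseline misses this
]

correct = sum(keyword_baseline(text) == label for text, label in gold)
accuracy = correct / len(gold)
print(f"baseline accuracy: {accuracy:.2f}")  # prints "baseline accuracy: 0.80"
```

A real evaluation would swap the toy heuristic for a trained model and the synthetic rows for the dataset's held-out labels.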
**Potential Errors:** Although we are confident in our labelling, some data points may be mislabelled: because they lack quantifiable identifiers, human error is possible within the data. We do not believe this occurs often, but in full transparency it is an issue we endeavor to resolve in subsequent updates.