---
license: gpl-3.0
---
# Social Science Causal and Non-Causal Claims Dataset
We manually curated a dataset for studying the use of causal language in the social science literature. It is based on the Cooperation Databank, a comprehensive collection of all papers dedicated to game theory applications within social science (Spadaro et al. 2022). From this databank, 2,590 articles were converted into raw text with the Grobid library (Lopez 2009), driven from Python, and subsequently segmented at the sentence level.

Following conversion, a post-processing stage corrected common errors introduced during PDF-to-text extraction. These errors typically involve confusions between similar-looking characters, such as “0” and “O” or “b” and “6”, and the incorrect joining or splitting of letters.

One of the authors (RN) labeled each sentence as causal, non-causal, or ambiguous using the Doccano web annotation tool (Nakayama et al. 2018). Sentences marked as ambiguous (117 of 1,058; 11.05%) were subsequently reviewed by all authors. Inter-rater agreement, estimated with Fleiss’ kappa (Fleiss 1971), was 0.76, indicating “substantial” agreement. For sentences where consensus was not reached, labels were finalized by majority voting.

The final dataset contains 529 causal and 529 non-causal sentences. It is therefore balanced, and we split it into 70% for training, 10% for validation, and 20% for testing. Minimal, illustrative sketches of the main pipeline steps follow.
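As a rough illustration of the conversion step, the sketch below sends one PDF to a local Grobid server and sentence-splits the resulting text with NLTK. It is not the authors’ actual pipeline: the server URL, the TEI parsing, and the use of `nltk` are all assumptions.

```python
# Sketch only: PDF -> raw text via a local Grobid server, then sentence
# segmentation. Assumes Grobid is running on its default port (8070) and
# that NLTK's "punkt" tokenizer data has been downloaded.
import requests
import xml.etree.ElementTree as ET
from nltk.tokenize import sent_tokenize

GROBID_URL = "http://localhost:8070/api/processFulltextDocument"
TEI_P = "{http://www.tei-c.org/ns/1.0}p"  # paragraph tag in Grobid's TEI output

def pdf_to_sentences(pdf_path: str) -> list[str]:
    with open(pdf_path, "rb") as f:
        resp = requests.post(GROBID_URL, files={"input": f}, timeout=120)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)  # Grobid returns TEI XML
    text = " ".join("".join(p.itertext()) for p in root.iter(TEI_P))
    return sent_tokenize(text)

sentences = pdf_to_sentences("article.pdf")
```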
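The specific correction rules are not published with the dataset; the snippet below only illustrates the kind of fixes described above (character confusions and bad joins or splits), with made-up patterns.

```python
# Illustrative only: example rules for the PDF-to-text confusions mentioned
# above. The actual rules applied to the dataset are not reproduced here.
import re

def clean_sentence(s: str) -> str:
    # Re-join a word split around a line-break hyphen, e.g. "coop- eration".
    s = re.sub(r"(\w)- (\w)", r"\1\2", s)
    # A "0" between lowercase letters is usually a misread "o", e.g. "g0ods".
    s = re.sub(r"(?<=[a-z])0(?=[a-z])", "o", s)
    # A "6" between lowercase letters is often a misread "b", e.g. "pu6lic".
    s = re.sub(r"(?<=[a-z])6(?=[a-z])", "b", s)
    # Collapse whitespace runs left behind by bad splits.
    return re.sub(r"\s+", " ", s).strip()

print(clean_sentence("Pu6lic g0ods games require coop- eration."))
# -> "Public goods games require cooperation."
```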
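The reported agreement (kappa = 0.76) can be computed with `statsmodels`, which ships a Fleiss’ kappa implementation; the ratings below are invented placeholders, not the real annotations.

```python
# Estimating inter-rater agreement with Fleiss' kappa via statsmodels.
# The ratings matrix here is a toy example, not the study's annotations.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = sentences, columns = annotators;
# 0 = non-causal, 1 = causal, 2 = ambiguous.
ratings = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [2, 2, 2],
    [1, 1, 0],
    [0, 0, 0],
])

counts, _ = aggregate_raters(ratings)  # -> (n_sentences, n_categories) counts
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")
```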
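Finally, a stratified 70/10/20 split such as the one described above can be produced with scikit-learn; the variable names and random seed here are assumptions, not the split actually shipped with the dataset.

```python
# Sketch of a stratified 70/10/20 train/validation/test split with
# scikit-learn. Stand-in data; the real set has 529 + 529 sentences.
from sklearn.model_selection import train_test_split

sentences = [f"sentence {i}" for i in range(1058)]
labels = ["causal"] * 529 + ["non-causal"] * 529

# First peel off 70% for training.
X_train, X_rest, y_train, y_rest = train_test_split(
    sentences, labels, train_size=0.70, stratify=labels, random_state=42
)
# Split the remaining 30% into 10% validation and 20% test (1/3 vs. 2/3).
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, train_size=1 / 3, stratify=y_rest, random_state=42
)
```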