---
license: cc-by-nc-nd-4.0
---
# Code-Mixed-Offensive-Language-Identification

This is a dataset for the offensive language detection task. It contains 100,000 code-mixed instances in Bangla, English, and Hindi.

### Dataset Generation:

The labelling schema of OLID[^1] and SOLID[^2] serves as the seed data, from which we randomly select 100,000 instances, categorized as Non-Offensive (NOT) or Offensive (OFF) for the purpose of our task. The resulting splits are roughly two-thirds Non-Offensive and one-third Offensive, as reported in the class distribution below. To synthesize the code-mixed dataset, we employ two distinct methodologies: the *Random Code-mixing Algorithm* by Krishnan et al. (2021)[^3] and *r-CM* by Santy et al. (2021)[^4].
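
A minimal sketch of token-level random code-mixing, assuming a toy English-to-Bangla lexicon; the lexicon, replacement probability, and implementation details here are illustrative assumptions, not the exact procedure of Krishnan et al.:

```python
import random

# Toy English -> romanized-Bangla lexicon; purely illustrative,
# not the dictionaries used to build this dataset.
EN_TO_BN = {"you": "tumi", "are": "acho", "very": "khub", "good": "bhalo"}

def random_code_mix(tokens, lexicon, p=0.5, seed=None):
    """Swap each token for its lexicon translation with probability p."""
    rng = random.Random(seed)
    return [lexicon[tok] if tok in lexicon and rng.random() < p else tok
            for tok in tokens]

print(random_code_mix("you are very good".split(), EN_TO_BN, p=1.0))
# p=1.0 swaps every dictionary word: ['tumi', 'acho', 'khub', 'bhalo']
```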

### Class Distribution:

#### For train.csv:

| Label | Count | Percentage |
|-------|-------|------------|
| NOT   | 40018 | 66.70%     |
| OFF   | 19982 | 33.30%     |

#### For dev.csv:

| Label | Count | Percentage |
|-------|-------|------------|
| NOT   | 13339 | 66.70%     |
| OFF   | 6661  | 33.30%     |

#### For test.csv:

| Label | Count | Percentage |
|-------|-------|------------|
| NOT   | 13340 | 66.70%     |
| OFF   | 6660  | 33.30%     |
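
The percentages above can be re-derived from the split files. A minimal sketch, assuming each CSV has a `label` column holding NOT/OFF values (the column name is an assumption, not confirmed by this card):

```python
import csv
from collections import Counter

def label_distribution(labels):
    """Return {label: (count, percentage)} for a sequence of labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {lab: (n, round(100 * n / total, 2)) for lab, n in counts.items()}

def distribution_from_csv(path, label_column="label"):
    # "label" is an assumed header name; adjust to the actual file.
    with open(path, newline="", encoding="utf-8") as f:
        return label_distribution(row[label_column] for row in csv.DictReader(f))

# Tiny in-memory demo mirroring the ~2:1 NOT/OFF split reported above.
print(label_distribution(["NOT", "NOT", "OFF"]))
# {'NOT': (2, 66.67), 'OFF': (1, 33.33)}
```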

### Cite our Paper:

If you use this dataset, please cite our paper:

```bibtex
@article{raihan2023mixed,
  title={Mixed-Distil-BERT: Code-mixed Language Modeling for Bangla, English, and Hindi},
  author={Raihan, Md Nishat and Goswami, Dhiman and Mahmud, Antara},
  journal={arXiv preprint arXiv:2309.10272},
  year={2023}
}
```

### References

[^1]: Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., & Kumar, R. (2019). SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation (pp. 75–86). [https://aclanthology.org/S19-2010](https://aclanthology.org/S19-2010)

[^2]: Rosenthal, S., Atanasova, P., Karadzhov, G., Zampieri, M., & Nakov, P. (2021). SOLID: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 915–928). [https://aclanthology.org/2021.findings-acl.80](https://aclanthology.org/2021.findings-acl.80)

[^3]: Krishnan, J., Anastasopoulos, A., Purohit, H., & Rangwala, H. (2021). Multilingual Code-Switching for Zero-Shot Cross-Lingual Intent Prediction and Slot Filling. arXiv preprint arXiv:2103.07792.

[^4]: Santy, S., Srinivasan, A., & Choudhury, M. (2021). BERTologiCoMix: How Does Code-Mixing Interact with Multilingual BERT? In Proceedings of the Second Workshop on Domain Adaptation for NLP (pp. 111–121).

---