FelipeGuerra committed
Commit 175fcfa
1 Parent(s): cc78eac
Update README.md
README.md CHANGED
@@ -25,8 +25,14 @@ This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://h

## Training and evaluation data

-The dataset used was a small one, consisting of 3570 tweets, which were manually labeled as cyberbullying or not cyberbullying.
-
+The dataset used was small, consisting of 3,570 tweets that were manually labeled as cyberbullying or not cyberbullying. A distinguishing feature of the dataset is that, for a given word, there is one tweet labeled as cyberbullying that contains that word and another tweet labeled as not cyberbullying that contains the same word. This pairing is possible because the context in which a word is used varies, so two tweets sharing the same word can end up in different classes.
+
+For instance, tweets in the not cyberbullying category predominantly contain obscene words that, in their particular context, do not amount to cyberbullying; an example is “Marica, se me olvidó ver el partido” (“Dude, I forgot to watch the game”). To a lesser extent, the not cyberbullying category also includes tweets sourced from Twitter trends in the Colombian region. Trends reflect the most popular topics and conversations in a given area at a specific time, capturing what people are discussing and sharing online in that locale.
+
+Trend-based tweets were used when it was not feasible to find not cyberbullying tweets containing a specific offensive word or phrase, such as “ojala te violen” (“I hope you get raped”). Conversely, tweets labeled as cyberbullying do not always contain strong or obscene words or phrases, as in “te voy a buscar” (“I'm going to come looking for you”).
+
+
+The dataset is balanced: it contains the same number of cyberbullying and not cyberbullying tweets. The keywords and phrases used to build the dataset were selected from the categories described in [Guidelines for the Fine-Grained Analysis of Cyberbullying](https://lt3.ugent.be/media/uploads/publications/2015/Guidelines_Cyberbullying_TechnicalReport_1.pdf) by Cynthia Van Hee, Ben Verhoeven, Els Lefever, Guy De Pauw, Walter Daelemans, and Véronique Hoste. Four categories were included: insult, threat, curse, and defamation. Insult covers offensive words intended to verbally hurt another person; threat covers expressions that aim to harm the victim's integrity; curse covers words that wish harm or misfortune on a person; and defamation covers content that seeks to damage the victim's reputation. These categories were chosen to capture a broad representation of the forms cyberbullying can take.
The tweets were labeled by an occupational therapist associated with the project.

## Training procedure
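
To make the paired-context labelling described above concrete, the quoted examples can be pictured as rows of a binary text-classification dataset. The sketch below is only an illustration: the column names, the 0/1 label encoding, and the use of the `datasets` library are assumptions, not a description of the project's actual data files.

```python
from datasets import Dataset

# Hypothetical rows built from the examples quoted in this section.
# Label encoding is assumed: 0 = not cyberbullying, 1 = cyberbullying.
rows = {
    "text": [
        "Marica, se me olvidó ver el partido",  # obscene word, harmless context
        "ojala te violen",                      # explicit threat/curse
        "te voy a buscar",                      # threatening despite no obscenity
    ],
    "label": [0, 1, 1],
}

dataset = Dataset.from_dict(rows)
print(dataset[0])  # {'text': 'Marica, se me olvidó ver el partido', 'label': 0}
```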