tommasoc and davanstrien (HF staff) committed
Commit a4c0dc4
1 Parent(s): 369d5c9

fix typos (#1)


- fix typos (a3e99afd9804a7fe62900fb027140131795f6a2a)


Co-authored-by: Daniel van Strien <davanstrien@users.noreply.huggingface.co>

Files changed (1)
README.md +5 -5
README.md CHANGED

@@ -5,15 +5,15 @@ language:
 pipeline_tag: text-classification
 ---
 
-Fine-tuned model for detecting instances of offensive language in Ducth tweets. The model has been trained with [DALC v2.0 ](https://github.com/tommasoc80/DALC).
+Fine-tuned model for detecting instances of offensive language in Dutch tweets. The model has been trained with [DALC v2.0 ](https://github.com/tommasoc80/DALC).
 
-Offensive language defintion is inhereted from SemEval 2019 OffensEval: "Posts containing any form of non-acceptable language (profanity) or a targeted offence,
+Offensive language definition is inherited from SemEval 2019 OffensEval: "Posts containing any form of non-acceptable language (profanity) or a targeted offence,
 which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words." ([Zampieri et al., 2019](https://aclanthology.org/N19-1144/))
 
-The model achieve the following results on multiple test data:
+The model achieves the following results on multiple test data:
 
 - DALC held-out test set: macro F1: 79.93; F1 Offensive: 70.34
 - HateCheck-NL (functional benchmark for hate speech): Accuracy: 61.40; Accuracy non-hateful tests: 47.61 ; Accuracy hateful tests: 68.86
-- OP-NL (dynamyc benchmark for offensive language): macro F1: 73.56
+- OP-NL (dynamic benchmark for offensive language): macro F1: 73.56
 
-More details on the training settings and pre-processind are available [here](https://github.com/tommasoc80/DALC)
+More details on the training settings and pre-processing are available [here](https://github.com/tommasoc80/DALC)
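
Since the card sets `pipeline_tag: text-classification`, the model should be loadable with the standard transformers pipeline. A minimal sketch follows; the repo ID `tommasoc80/dalc-offensive-nl`, the example tweet, and the label names are illustrative assumptions, not taken from this commit.

```python
# Minimal usage sketch (assumes the transformers library is installed).
# NOTE: "tommasoc80/dalc-offensive-nl" is a placeholder repo ID, not the real path.
from transformers import pipeline

classifier = pipeline("text-classification", model="tommasoc80/dalc-offensive-nl")

# Classify a Dutch tweet; the actual label names come from the model's config.
print(classifier("Wat een waardeloze wedstrijd was dat."))
# e.g. [{'label': 'NOT_OFFENSIVE', 'score': 0.97}]
```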